00:00:00.002 Started by upstream project "autotest-nightly" build number 3877 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3257 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.132 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.133 The recommended git tool is: git 00:00:00.133 using credential 00000000-0000-0000-0000-000000000002 00:00:00.135 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.218 Fetching changes from the remote Git repository 00:00:00.223 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.269 Using shallow fetch with depth 1 00:00:00.269 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.269 > git --version # timeout=10 00:00:00.317 > git --version # 'git version 2.39.2' 00:00:00.317 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.335 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.335 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.601 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.612 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.626 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:06.626 > git config core.sparsecheckout # timeout=10 00:00:06.637 > git read-tree -mu HEAD # timeout=10 00:00:06.655 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:06.672 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:06.672 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:06.751 [Pipeline] Start of Pipeline 00:00:06.768 [Pipeline] library 00:00:06.771 Loading library shm_lib@master 00:00:06.771 Library shm_lib@master is cached. Copying from home. 00:00:06.786 [Pipeline] node 00:00:06.798 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.800 [Pipeline] { 00:00:06.809 [Pipeline] catchError 00:00:06.810 [Pipeline] { 00:00:06.822 [Pipeline] wrap 00:00:06.829 [Pipeline] { 00:00:06.836 [Pipeline] stage 00:00:06.837 [Pipeline] { (Prologue) 00:00:07.036 [Pipeline] sh 00:00:07.312 + logger -p user.info -t JENKINS-CI 00:00:07.333 [Pipeline] echo 00:00:07.334 Node: GP11 00:00:07.342 [Pipeline] sh 00:00:07.631 [Pipeline] setCustomBuildProperty 00:00:07.645 [Pipeline] echo 00:00:07.646 Cleanup processes 00:00:07.651 [Pipeline] sh 00:00:07.928 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.928 1147356 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.941 [Pipeline] sh 00:00:08.219 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.219 ++ grep -v 'sudo pgrep' 00:00:08.219 ++ awk '{print $1}' 00:00:08.219 + sudo kill -9 00:00:08.219 + true 00:00:08.229 [Pipeline] cleanWs 00:00:08.236 [WS-CLEANUP] Deleting project workspace... 00:00:08.236 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.241 [WS-CLEANUP] done 00:00:08.245 [Pipeline] setCustomBuildProperty 00:00:08.257 [Pipeline] sh 00:00:08.531 + sudo git config --global --replace-all safe.directory '*' 00:00:08.621 [Pipeline] httpRequest 00:00:08.642 [Pipeline] echo 00:00:08.644 Sorcerer 10.211.164.101 is alive 00:00:08.652 [Pipeline] httpRequest 00:00:08.656 HttpMethod: GET 00:00:08.656 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:08.657 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:08.658 Response Code: HTTP/1.1 200 OK 00:00:08.658 Success: Status code 200 is in the accepted range: 200,404 00:00:08.659 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:09.483 [Pipeline] sh 00:00:09.768 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:09.787 [Pipeline] httpRequest 00:00:09.808 [Pipeline] echo 00:00:09.810 Sorcerer 10.211.164.101 is alive 00:00:09.818 [Pipeline] httpRequest 00:00:09.822 HttpMethod: GET 00:00:09.823 URL: http://10.211.164.101/packages/spdk_968224f4625508c0012db59f92f718062c66a8c3.tar.gz 00:00:09.823 Sending request to url: http://10.211.164.101/packages/spdk_968224f4625508c0012db59f92f718062c66a8c3.tar.gz 00:00:09.835 Response Code: HTTP/1.1 200 OK 00:00:09.835 Success: Status code 200 is in the accepted range: 200,404 00:00:09.836 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_968224f4625508c0012db59f92f718062c66a8c3.tar.gz 00:00:44.379 [Pipeline] sh 00:00:44.666 + tar --no-same-owner -xf spdk_968224f4625508c0012db59f92f718062c66a8c3.tar.gz 00:00:47.995 [Pipeline] sh 00:00:48.278 + git -C spdk log --oneline -n5 00:00:48.278 968224f46 app/trace_record: add a optional option '-t' 00:00:48.278 d83ccf437 accel: clarify the usage of spdk_accel_sequence_abort() 00:00:48.278 f282c9958 doc/jsonrpc.md fix style issue 00:00:48.278 868be8ed2 iscs: chap mutual authentication should apply when configured. 00:00:48.278 16b33d51e iscsi: Authenticating discovery based on givven credentials. 
00:00:48.291 [Pipeline] } 00:00:48.310 [Pipeline] // stage 00:00:48.321 [Pipeline] stage 00:00:48.323 [Pipeline] { (Prepare) 00:00:48.344 [Pipeline] writeFile 00:00:48.363 [Pipeline] sh 00:00:48.643 + logger -p user.info -t JENKINS-CI 00:00:48.656 [Pipeline] sh 00:00:48.932 + logger -p user.info -t JENKINS-CI 00:00:48.944 [Pipeline] sh 00:00:49.222 + cat autorun-spdk.conf 00:00:49.222 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.222 SPDK_TEST_NVMF=1 00:00:49.222 SPDK_TEST_NVME_CLI=1 00:00:49.222 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.222 SPDK_TEST_NVMF_NICS=e810 00:00:49.222 SPDK_RUN_ASAN=1 00:00:49.222 SPDK_RUN_UBSAN=1 00:00:49.222 NET_TYPE=phy 00:00:49.229 RUN_NIGHTLY=1 00:00:49.236 [Pipeline] readFile 00:00:49.269 [Pipeline] withEnv 00:00:49.272 [Pipeline] { 00:00:49.287 [Pipeline] sh 00:00:49.570 + set -ex 00:00:49.570 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:49.570 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:49.570 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.570 ++ SPDK_TEST_NVMF=1 00:00:49.570 ++ SPDK_TEST_NVME_CLI=1 00:00:49.570 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.570 ++ SPDK_TEST_NVMF_NICS=e810 00:00:49.570 ++ SPDK_RUN_ASAN=1 00:00:49.570 ++ SPDK_RUN_UBSAN=1 00:00:49.570 ++ NET_TYPE=phy 00:00:49.570 ++ RUN_NIGHTLY=1 00:00:49.570 + case $SPDK_TEST_NVMF_NICS in 00:00:49.570 + DRIVERS=ice 00:00:49.570 + [[ tcp == \r\d\m\a ]] 00:00:49.570 + [[ -n ice ]] 00:00:49.570 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:49.570 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:49.570 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:49.570 rmmod: ERROR: Module irdma is not currently loaded 00:00:49.570 rmmod: ERROR: Module i40iw is not currently loaded 00:00:49.570 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:49.570 + true 00:00:49.570 + for D in $DRIVERS 00:00:49.570 + sudo modprobe ice 00:00:49.570 + exit 0 00:00:49.581 [Pipeline] } 00:00:49.602 [Pipeline] // withEnv 00:00:49.608 [Pipeline] } 00:00:49.626 [Pipeline] // stage 00:00:49.637 [Pipeline] catchError 00:00:49.639 [Pipeline] { 00:00:49.657 [Pipeline] timeout 00:00:49.658 Timeout set to expire in 50 min 00:00:49.660 [Pipeline] { 00:00:49.677 [Pipeline] stage 00:00:49.679 [Pipeline] { (Tests) 00:00:49.698 [Pipeline] sh 00:00:49.981 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:49.981 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:49.981 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:49.981 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:49.981 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:49.981 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:49.981 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:49.981 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:49.981 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:49.981 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:49.981 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:49.981 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:49.981 + source /etc/os-release 00:00:49.981 ++ NAME='Fedora Linux' 00:00:49.981 ++ VERSION='38 (Cloud Edition)' 00:00:49.981 ++ ID=fedora 00:00:49.981 ++ VERSION_ID=38 00:00:49.981 ++ VERSION_CODENAME= 00:00:49.981 ++ PLATFORM_ID=platform:f38 00:00:49.981 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:49.981 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:49.981 ++ LOGO=fedora-logo-icon 00:00:49.981 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:49.981 ++ HOME_URL=https://fedoraproject.org/ 00:00:49.981 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:49.981 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:49.981 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:49.981 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:49.981 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:49.981 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:49.981 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:49.981 ++ SUPPORT_END=2024-05-14 00:00:49.981 ++ VARIANT='Cloud Edition' 00:00:49.981 ++ VARIANT_ID=cloud 00:00:49.981 + uname -a 00:00:49.981 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:49.981 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:50.919 Hugepages 00:00:50.919 node hugesize free / total 00:00:50.919 node0 1048576kB 0 / 0 00:00:50.919 node0 2048kB 0 / 0 00:00:50.919 node1 1048576kB 0 / 0 00:00:50.919 node1 2048kB 0 / 0 00:00:50.919 00:00:50.919 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:50.919 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:50.919 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:50.919 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:50.919 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:50.919 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:50.919 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:50.919 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:50.919 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:50.919 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:50.919 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:50.919 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:50.919 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:50.919 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:50.919 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:50.919 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:50.919 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:50.919 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:50.919 + rm -f /tmp/spdk-ld-path 00:00:50.919 + source autorun-spdk.conf 00:00:50.919 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.919 ++ SPDK_TEST_NVMF=1 00:00:50.919 ++ SPDK_TEST_NVME_CLI=1 00:00:50.919 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.919 ++ SPDK_TEST_NVMF_NICS=e810 00:00:50.919 ++ SPDK_RUN_ASAN=1 00:00:50.919 ++ SPDK_RUN_UBSAN=1 00:00:50.919 ++ NET_TYPE=phy 00:00:50.919 ++ RUN_NIGHTLY=1 00:00:50.919 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:50.919 + [[ -n '' ]] 00:00:50.919 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:50.919 + for M in /var/spdk/build-*-manifest.txt 00:00:50.919 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:00:50.919 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:50.919 + for M in /var/spdk/build-*-manifest.txt 00:00:50.919 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:50.919 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:50.919 ++ uname 00:00:50.919 + [[ Linux == \L\i\n\u\x ]] 00:00:50.919 + sudo dmesg -T 00:00:50.919 + sudo dmesg --clear 00:00:50.919 + dmesg_pid=1148064 00:00:50.919 + [[ Fedora Linux == FreeBSD ]] 00:00:50.919 + sudo dmesg -Tw 00:00:50.919 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:50.919 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:50.919 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:50.919 + [[ -x /usr/src/fio-static/fio ]] 00:00:50.919 + export FIO_BIN=/usr/src/fio-static/fio 00:00:50.919 + FIO_BIN=/usr/src/fio-static/fio 00:00:50.919 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:50.919 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:50.919 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:50.919 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:50.919 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:50.919 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:50.919 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:50.919 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:50.919 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:50.919 Test configuration: 00:00:50.919 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.919 SPDK_TEST_NVMF=1 00:00:50.919 SPDK_TEST_NVME_CLI=1 00:00:50.919 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.919 SPDK_TEST_NVMF_NICS=e810 00:00:50.919 SPDK_RUN_ASAN=1 00:00:50.919 SPDK_RUN_UBSAN=1 00:00:50.919 NET_TYPE=phy 00:00:51.179 RUN_NIGHTLY=1 14:03:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:51.179 14:03:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:51.179 14:03:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:51.179 14:03:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:51.179 14:03:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:51.179 14:03:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:51.179 14:03:00 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:51.179 14:03:00 -- paths/export.sh@5 -- $ export PATH 00:00:51.179 14:03:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:51.179 14:03:00 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:51.179 14:03:00 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:51.179 14:03:00 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720612980.XXXXXX 00:00:51.179 14:03:00 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720612980.wWJEj0 00:00:51.179 14:03:00 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:51.179 14:03:00 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:51.179 14:03:00 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:51.179 14:03:00 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:51.179 14:03:00 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:51.179 14:03:00 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:51.179 14:03:00 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:51.179 14:03:00 -- common/autotest_common.sh@10 -- $ set +x 00:00:51.179 14:03:00 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:00:51.179 14:03:00 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:51.179 14:03:00 -- pm/common@17 -- $ local monitor 00:00:51.179 14:03:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:51.179 14:03:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:51.179 14:03:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:51.179 14:03:00 -- pm/common@21 -- $ date +%s 00:00:51.179 14:03:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:51.179 14:03:00 -- pm/common@21 -- $ date +%s 00:00:51.179 14:03:00 -- pm/common@25 -- $ sleep 1 00:00:51.179 14:03:00 -- pm/common@21 -- $ date +%s 00:00:51.179 14:03:00 -- pm/common@21 -- $ date +%s 00:00:51.180 14:03:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720612980 00:00:51.180 14:03:00 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720612980 00:00:51.180 14:03:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720612980 00:00:51.180 14:03:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720612980 00:00:51.180 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720612980_collect-vmstat.pm.log 00:00:51.180 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720612980_collect-cpu-load.pm.log 00:00:51.180 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720612980_collect-cpu-temp.pm.log 00:00:51.180 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720612980_collect-bmc-pm.bmc.pm.log 00:00:52.117 14:03:01 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:52.117 14:03:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:52.117 14:03:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:52.117 14:03:01 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:52.117 14:03:01 -- spdk/autobuild.sh@16 -- $ date -u 00:00:52.117 Wed Jul 10 12:03:01 PM UTC 2024 00:00:52.117 14:03:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:52.117 v24.09-pre-193-g968224f46 00:00:52.117 14:03:01 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:00:52.117 14:03:01 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:00:52.117 14:03:01 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:52.117 14:03:01 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:52.117 14:03:01 -- common/autotest_common.sh@10 -- $ set +x 00:00:52.117 ************************************ 00:00:52.117 START TEST asan 00:00:52.117 ************************************ 00:00:52.117 14:03:01 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:00:52.117 using asan 00:00:52.117 00:00:52.118 real 0m0.000s 00:00:52.118 user 0m0.000s 00:00:52.118 sys 0m0.000s 00:00:52.118 14:03:01 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:52.118 14:03:01 asan -- common/autotest_common.sh@10 -- $ set +x 00:00:52.118 ************************************ 00:00:52.118 END TEST asan 00:00:52.118 ************************************ 00:00:52.118 14:03:01 -- common/autotest_common.sh@1142 -- $ return 0 00:00:52.118 14:03:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:52.118 14:03:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:52.118 14:03:01 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:52.118 14:03:01 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:52.118 14:03:01 -- common/autotest_common.sh@10 -- $ set +x 00:00:52.118 ************************************ 00:00:52.118 START TEST ubsan 00:00:52.118 ************************************ 00:00:52.118 14:03:01 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:52.118 using ubsan 00:00:52.118 00:00:52.118 real 0m0.000s 00:00:52.118 user 0m0.000s 00:00:52.118 sys 
0m0.000s 00:00:52.118 14:03:01 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:52.118 14:03:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:52.118 ************************************ 00:00:52.118 END TEST ubsan 00:00:52.118 ************************************ 00:00:52.118 14:03:01 -- common/autotest_common.sh@1142 -- $ return 0 00:00:52.118 14:03:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:52.118 14:03:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:52.118 14:03:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:52.118 14:03:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:52.118 14:03:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:52.118 14:03:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:52.118 14:03:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:52.118 14:03:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:52.118 14:03:01 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:00:52.376 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:52.376 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:52.636 Using 'verbs' RDMA provider 00:01:03.176 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:13.160 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:13.160 Creating mk/config.mk...done. 00:01:13.160 Creating mk/cc.flags.mk...done. 00:01:13.160 Type 'make' to build. 00:01:13.160 14:03:21 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:13.160 14:03:21 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:13.160 14:03:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:13.160 14:03:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.160 ************************************ 00:01:13.160 START TEST make 00:01:13.160 ************************************ 00:01:13.160 14:03:22 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:13.160 make[1]: Nothing to be done for 'all'. 
00:01:21.300 The Meson build system 00:01:21.300 Version: 1.3.1 00:01:21.300 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:21.300 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:21.300 Build type: native build 00:01:21.300 Program cat found: YES (/usr/bin/cat) 00:01:21.300 Project name: DPDK 00:01:21.300 Project version: 24.03.0 00:01:21.300 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:21.300 C linker for the host machine: cc ld.bfd 2.39-16 00:01:21.300 Host machine cpu family: x86_64 00:01:21.300 Host machine cpu: x86_64 00:01:21.300 Message: ## Building in Developer Mode ## 00:01:21.300 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:21.300 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:21.300 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:21.300 Program python3 found: YES (/usr/bin/python3) 00:01:21.300 Program cat found: YES (/usr/bin/cat) 00:01:21.300 Compiler for C supports arguments -march=native: YES 00:01:21.300 Checking for size of "void *" : 8 00:01:21.300 Checking for size of "void *" : 8 (cached) 00:01:21.300 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:21.300 Library m found: YES 00:01:21.300 Library numa found: YES 00:01:21.300 Has header "numaif.h" : YES 00:01:21.300 Library fdt found: NO 00:01:21.300 Library execinfo found: NO 00:01:21.300 Has header "execinfo.h" : YES 00:01:21.300 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:21.300 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:21.300 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:21.300 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:21.300 Run-time dependency openssl found: YES 3.0.9 00:01:21.300 Run-time dependency libpcap found: YES 1.10.4 00:01:21.300 Has header "pcap.h" with dependency libpcap: YES 00:01:21.300 Compiler for C supports arguments -Wcast-qual: YES 00:01:21.300 Compiler for C supports arguments -Wdeprecated: YES 00:01:21.300 Compiler for C supports arguments -Wformat: YES 00:01:21.300 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:21.300 Compiler for C supports arguments -Wformat-security: NO 00:01:21.300 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:21.300 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:21.300 Compiler for C supports arguments -Wnested-externs: YES 00:01:21.300 Compiler for C supports arguments -Wold-style-definition: YES 00:01:21.300 Compiler for C supports arguments -Wpointer-arith: YES 00:01:21.300 Compiler for C supports arguments -Wsign-compare: YES 00:01:21.300 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:21.300 Compiler for C supports arguments -Wundef: YES 00:01:21.300 Compiler for C supports arguments -Wwrite-strings: YES 00:01:21.300 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:21.300 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:21.300 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:21.300 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:21.300 Program objdump found: YES (/usr/bin/objdump) 00:01:21.300 Compiler for C supports arguments -mavx512f: YES 00:01:21.300 Checking if "AVX512 checking" compiles: YES 
00:01:21.300 Fetching value of define "__SSE4_2__" : 1 00:01:21.300 Fetching value of define "__AES__" : 1 00:01:21.300 Fetching value of define "__AVX__" : 1 00:01:21.300 Fetching value of define "__AVX2__" : (undefined) 00:01:21.300 Fetching value of define "__AVX512BW__" : (undefined) 00:01:21.300 Fetching value of define "__AVX512CD__" : (undefined) 00:01:21.300 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:21.300 Fetching value of define "__AVX512F__" : (undefined) 00:01:21.300 Fetching value of define "__AVX512VL__" : (undefined) 00:01:21.300 Fetching value of define "__PCLMUL__" : 1 00:01:21.300 Fetching value of define "__RDRND__" : 1 00:01:21.300 Fetching value of define "__RDSEED__" : (undefined) 00:01:21.300 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:21.300 Fetching value of define "__znver1__" : (undefined) 00:01:21.300 Fetching value of define "__znver2__" : (undefined) 00:01:21.300 Fetching value of define "__znver3__" : (undefined) 00:01:21.300 Fetching value of define "__znver4__" : (undefined) 00:01:21.300 Library asan found: YES 00:01:21.300 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:21.300 Message: lib/log: Defining dependency "log" 00:01:21.300 Message: lib/kvargs: Defining dependency "kvargs" 00:01:21.300 Message: lib/telemetry: Defining dependency "telemetry" 00:01:21.300 Library rt found: YES 00:01:21.300 Checking for function "getentropy" : NO 00:01:21.300 Message: lib/eal: Defining dependency "eal" 00:01:21.300 Message: lib/ring: Defining dependency "ring" 00:01:21.300 Message: lib/rcu: Defining dependency "rcu" 00:01:21.300 Message: lib/mempool: Defining dependency "mempool" 00:01:21.300 Message: lib/mbuf: Defining dependency "mbuf" 00:01:21.300 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:21.300 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:21.300 Compiler for C supports arguments -mpclmul: YES 00:01:21.300 Compiler for C supports arguments -maes: YES 00:01:21.300 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:21.300 Compiler for C supports arguments -mavx512bw: YES 00:01:21.300 Compiler for C supports arguments -mavx512dq: YES 00:01:21.300 Compiler for C supports arguments -mavx512vl: YES 00:01:21.300 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:21.300 Compiler for C supports arguments -mavx2: YES 00:01:21.300 Compiler for C supports arguments -mavx: YES 00:01:21.300 Message: lib/net: Defining dependency "net" 00:01:21.300 Message: lib/meter: Defining dependency "meter" 00:01:21.300 Message: lib/ethdev: Defining dependency "ethdev" 00:01:21.300 Message: lib/pci: Defining dependency "pci" 00:01:21.300 Message: lib/cmdline: Defining dependency "cmdline" 00:01:21.300 Message: lib/hash: Defining dependency "hash" 00:01:21.300 Message: lib/timer: Defining dependency "timer" 00:01:21.300 Message: lib/compressdev: Defining dependency "compressdev" 00:01:21.300 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:21.300 Message: lib/dmadev: Defining dependency "dmadev" 00:01:21.300 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:21.300 Message: lib/power: Defining dependency "power" 00:01:21.300 Message: lib/reorder: Defining dependency "reorder" 00:01:21.300 Message: lib/security: Defining dependency "security" 00:01:21.300 Has header "linux/userfaultfd.h" : YES 00:01:21.300 Has header "linux/vduse.h" : YES 00:01:21.300 Message: lib/vhost: Defining dependency "vhost" 00:01:21.300 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:01:21.300 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:21.300 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:21.300 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:21.300 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:21.300 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:21.300 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:21.300 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:21.300 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:21.300 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:21.300 Program doxygen found: YES (/usr/bin/doxygen) 00:01:21.300 Configuring doxy-api-html.conf using configuration 00:01:21.300 Configuring doxy-api-man.conf using configuration 00:01:21.300 Program mandb found: YES (/usr/bin/mandb) 00:01:21.300 Program sphinx-build found: NO 00:01:21.300 Configuring rte_build_config.h using configuration 00:01:21.300 Message: 00:01:21.300 ================= 00:01:21.300 Applications Enabled 00:01:21.300 ================= 00:01:21.300 00:01:21.300 apps: 00:01:21.300 00:01:21.300 00:01:21.300 Message: 00:01:21.300 ================= 00:01:21.300 Libraries Enabled 00:01:21.300 ================= 00:01:21.300 00:01:21.300 libs: 00:01:21.300 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:21.300 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:21.300 cryptodev, dmadev, power, reorder, security, vhost, 00:01:21.300 00:01:21.300 Message: 00:01:21.300 =============== 00:01:21.300 Drivers Enabled 00:01:21.300 =============== 00:01:21.300 00:01:21.300 common: 00:01:21.300 00:01:21.300 bus: 00:01:21.300 pci, vdev, 00:01:21.300 mempool: 00:01:21.300 ring, 00:01:21.300 dma: 00:01:21.300 00:01:21.300 net: 00:01:21.301 00:01:21.301 crypto: 00:01:21.301 00:01:21.301 compress: 00:01:21.301 00:01:21.301 vdpa: 00:01:21.301 00:01:21.301 00:01:21.301 Message: 00:01:21.301 ================= 00:01:21.301 Content Skipped 00:01:21.301 ================= 00:01:21.301 00:01:21.301 apps: 00:01:21.301 dumpcap: explicitly disabled via build config 00:01:21.301 graph: explicitly disabled via build config 00:01:21.301 pdump: explicitly disabled via build config 00:01:21.301 proc-info: explicitly disabled via build config 00:01:21.301 test-acl: explicitly disabled via build config 00:01:21.301 test-bbdev: explicitly disabled via build config 00:01:21.301 test-cmdline: explicitly disabled via build config 00:01:21.301 test-compress-perf: explicitly disabled via build config 00:01:21.301 test-crypto-perf: explicitly disabled via build config 00:01:21.301 test-dma-perf: explicitly disabled via build config 00:01:21.301 test-eventdev: explicitly disabled via build config 00:01:21.301 test-fib: explicitly disabled via build config 00:01:21.301 test-flow-perf: explicitly disabled via build config 00:01:21.301 test-gpudev: explicitly disabled via build config 00:01:21.301 test-mldev: explicitly disabled via build config 00:01:21.301 test-pipeline: explicitly disabled via build config 00:01:21.301 test-pmd: explicitly disabled via build config 00:01:21.301 test-regex: explicitly disabled via build config 00:01:21.301 test-sad: explicitly disabled via build config 00:01:21.301 test-security-perf: explicitly disabled via build config 00:01:21.301 00:01:21.301 libs: 00:01:21.301 argparse: explicitly disabled 
via build config 00:01:21.301 metrics: explicitly disabled via build config 00:01:21.301 acl: explicitly disabled via build config 00:01:21.301 bbdev: explicitly disabled via build config 00:01:21.301 bitratestats: explicitly disabled via build config 00:01:21.301 bpf: explicitly disabled via build config 00:01:21.301 cfgfile: explicitly disabled via build config 00:01:21.301 distributor: explicitly disabled via build config 00:01:21.301 efd: explicitly disabled via build config 00:01:21.301 eventdev: explicitly disabled via build config 00:01:21.301 dispatcher: explicitly disabled via build config 00:01:21.301 gpudev: explicitly disabled via build config 00:01:21.301 gro: explicitly disabled via build config 00:01:21.301 gso: explicitly disabled via build config 00:01:21.301 ip_frag: explicitly disabled via build config 00:01:21.301 jobstats: explicitly disabled via build config 00:01:21.301 latencystats: explicitly disabled via build config 00:01:21.301 lpm: explicitly disabled via build config 00:01:21.301 member: explicitly disabled via build config 00:01:21.301 pcapng: explicitly disabled via build config 00:01:21.301 rawdev: explicitly disabled via build config 00:01:21.301 regexdev: explicitly disabled via build config 00:01:21.301 mldev: explicitly disabled via build config 00:01:21.301 rib: explicitly disabled via build config 00:01:21.301 sched: explicitly disabled via build config 00:01:21.301 stack: explicitly disabled via build config 00:01:21.301 ipsec: explicitly disabled via build config 00:01:21.301 pdcp: explicitly disabled via build config 00:01:21.301 fib: explicitly disabled via build config 00:01:21.301 port: explicitly disabled via build config 00:01:21.301 pdump: explicitly disabled via build config 00:01:21.301 table: explicitly disabled via build config 00:01:21.301 pipeline: explicitly disabled via build config 00:01:21.301 graph: explicitly disabled via build config 00:01:21.301 node: explicitly disabled via build config 00:01:21.301 00:01:21.301 drivers: 00:01:21.301 common/cpt: not in enabled drivers build config 00:01:21.301 common/dpaax: not in enabled drivers build config 00:01:21.301 common/iavf: not in enabled drivers build config 00:01:21.301 common/idpf: not in enabled drivers build config 00:01:21.301 common/ionic: not in enabled drivers build config 00:01:21.301 common/mvep: not in enabled drivers build config 00:01:21.301 common/octeontx: not in enabled drivers build config 00:01:21.301 bus/auxiliary: not in enabled drivers build config 00:01:21.301 bus/cdx: not in enabled drivers build config 00:01:21.301 bus/dpaa: not in enabled drivers build config 00:01:21.301 bus/fslmc: not in enabled drivers build config 00:01:21.301 bus/ifpga: not in enabled drivers build config 00:01:21.301 bus/platform: not in enabled drivers build config 00:01:21.301 bus/uacce: not in enabled drivers build config 00:01:21.301 bus/vmbus: not in enabled drivers build config 00:01:21.301 common/cnxk: not in enabled drivers build config 00:01:21.301 common/mlx5: not in enabled drivers build config 00:01:21.301 common/nfp: not in enabled drivers build config 00:01:21.301 common/nitrox: not in enabled drivers build config 00:01:21.301 common/qat: not in enabled drivers build config 00:01:21.301 common/sfc_efx: not in enabled drivers build config 00:01:21.301 mempool/bucket: not in enabled drivers build config 00:01:21.301 mempool/cnxk: not in enabled drivers build config 00:01:21.301 mempool/dpaa: not in enabled drivers build config 00:01:21.301 mempool/dpaa2: not in enabled 
drivers build config 00:01:21.301 mempool/octeontx: not in enabled drivers build config 00:01:21.301 mempool/stack: not in enabled drivers build config 00:01:21.301 dma/cnxk: not in enabled drivers build config 00:01:21.301 dma/dpaa: not in enabled drivers build config 00:01:21.301 dma/dpaa2: not in enabled drivers build config 00:01:21.301 dma/hisilicon: not in enabled drivers build config 00:01:21.301 dma/idxd: not in enabled drivers build config 00:01:21.301 dma/ioat: not in enabled drivers build config 00:01:21.301 dma/skeleton: not in enabled drivers build config 00:01:21.301 net/af_packet: not in enabled drivers build config 00:01:21.301 net/af_xdp: not in enabled drivers build config 00:01:21.301 net/ark: not in enabled drivers build config 00:01:21.301 net/atlantic: not in enabled drivers build config 00:01:21.301 net/avp: not in enabled drivers build config 00:01:21.301 net/axgbe: not in enabled drivers build config 00:01:21.301 net/bnx2x: not in enabled drivers build config 00:01:21.301 net/bnxt: not in enabled drivers build config 00:01:21.301 net/bonding: not in enabled drivers build config 00:01:21.301 net/cnxk: not in enabled drivers build config 00:01:21.301 net/cpfl: not in enabled drivers build config 00:01:21.301 net/cxgbe: not in enabled drivers build config 00:01:21.301 net/dpaa: not in enabled drivers build config 00:01:21.301 net/dpaa2: not in enabled drivers build config 00:01:21.301 net/e1000: not in enabled drivers build config 00:01:21.301 net/ena: not in enabled drivers build config 00:01:21.301 net/enetc: not in enabled drivers build config 00:01:21.301 net/enetfec: not in enabled drivers build config 00:01:21.301 net/enic: not in enabled drivers build config 00:01:21.301 net/failsafe: not in enabled drivers build config 00:01:21.301 net/fm10k: not in enabled drivers build config 00:01:21.301 net/gve: not in enabled drivers build config 00:01:21.301 net/hinic: not in enabled drivers build config 00:01:21.301 net/hns3: not in enabled drivers build config 00:01:21.301 net/i40e: not in enabled drivers build config 00:01:21.301 net/iavf: not in enabled drivers build config 00:01:21.301 net/ice: not in enabled drivers build config 00:01:21.301 net/idpf: not in enabled drivers build config 00:01:21.301 net/igc: not in enabled drivers build config 00:01:21.301 net/ionic: not in enabled drivers build config 00:01:21.301 net/ipn3ke: not in enabled drivers build config 00:01:21.301 net/ixgbe: not in enabled drivers build config 00:01:21.301 net/mana: not in enabled drivers build config 00:01:21.301 net/memif: not in enabled drivers build config 00:01:21.301 net/mlx4: not in enabled drivers build config 00:01:21.301 net/mlx5: not in enabled drivers build config 00:01:21.301 net/mvneta: not in enabled drivers build config 00:01:21.301 net/mvpp2: not in enabled drivers build config 00:01:21.301 net/netvsc: not in enabled drivers build config 00:01:21.301 net/nfb: not in enabled drivers build config 00:01:21.301 net/nfp: not in enabled drivers build config 00:01:21.301 net/ngbe: not in enabled drivers build config 00:01:21.301 net/null: not in enabled drivers build config 00:01:21.301 net/octeontx: not in enabled drivers build config 00:01:21.301 net/octeon_ep: not in enabled drivers build config 00:01:21.301 net/pcap: not in enabled drivers build config 00:01:21.301 net/pfe: not in enabled drivers build config 00:01:21.301 net/qede: not in enabled drivers build config 00:01:21.302 net/ring: not in enabled drivers build config 00:01:21.302 net/sfc: not in enabled drivers 
build config 00:01:21.302 net/softnic: not in enabled drivers build config 00:01:21.302 net/tap: not in enabled drivers build config 00:01:21.302 net/thunderx: not in enabled drivers build config 00:01:21.302 net/txgbe: not in enabled drivers build config 00:01:21.302 net/vdev_netvsc: not in enabled drivers build config 00:01:21.302 net/vhost: not in enabled drivers build config 00:01:21.302 net/virtio: not in enabled drivers build config 00:01:21.302 net/vmxnet3: not in enabled drivers build config 00:01:21.302 raw/*: missing internal dependency, "rawdev" 00:01:21.302 crypto/armv8: not in enabled drivers build config 00:01:21.302 crypto/bcmfs: not in enabled drivers build config 00:01:21.302 crypto/caam_jr: not in enabled drivers build config 00:01:21.302 crypto/ccp: not in enabled drivers build config 00:01:21.302 crypto/cnxk: not in enabled drivers build config 00:01:21.302 crypto/dpaa_sec: not in enabled drivers build config 00:01:21.302 crypto/dpaa2_sec: not in enabled drivers build config 00:01:21.302 crypto/ipsec_mb: not in enabled drivers build config 00:01:21.302 crypto/mlx5: not in enabled drivers build config 00:01:21.302 crypto/mvsam: not in enabled drivers build config 00:01:21.302 crypto/nitrox: not in enabled drivers build config 00:01:21.302 crypto/null: not in enabled drivers build config 00:01:21.302 crypto/octeontx: not in enabled drivers build config 00:01:21.302 crypto/openssl: not in enabled drivers build config 00:01:21.302 crypto/scheduler: not in enabled drivers build config 00:01:21.302 crypto/uadk: not in enabled drivers build config 00:01:21.302 crypto/virtio: not in enabled drivers build config 00:01:21.302 compress/isal: not in enabled drivers build config 00:01:21.302 compress/mlx5: not in enabled drivers build config 00:01:21.302 compress/nitrox: not in enabled drivers build config 00:01:21.302 compress/octeontx: not in enabled drivers build config 00:01:21.302 compress/zlib: not in enabled drivers build config 00:01:21.302 regex/*: missing internal dependency, "regexdev" 00:01:21.302 ml/*: missing internal dependency, "mldev" 00:01:21.302 vdpa/ifc: not in enabled drivers build config 00:01:21.302 vdpa/mlx5: not in enabled drivers build config 00:01:21.302 vdpa/nfp: not in enabled drivers build config 00:01:21.302 vdpa/sfc: not in enabled drivers build config 00:01:21.302 event/*: missing internal dependency, "eventdev" 00:01:21.302 baseband/*: missing internal dependency, "bbdev" 00:01:21.302 gpu/*: missing internal dependency, "gpudev" 00:01:21.302 00:01:21.302 00:01:21.561 Build targets in project: 85 00:01:21.561 00:01:21.561 DPDK 24.03.0 00:01:21.561 00:01:21.561 User defined options 00:01:21.561 buildtype : debug 00:01:21.561 default_library : shared 00:01:21.561 libdir : lib 00:01:21.561 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:21.561 b_sanitize : address 00:01:21.561 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:21.561 c_link_args : 00:01:21.561 cpu_instruction_set: native 00:01:21.561 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:21.561 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:21.561 enable_docs : false 00:01:21.561 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:21.561 enable_kmods : false 00:01:21.561 max_lcores : 128 00:01:21.561 tests : false 00:01:21.561 00:01:21.561 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:22.132 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:22.132 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:22.132 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:22.132 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:22.132 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:22.132 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:22.132 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:22.132 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:22.132 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:22.132 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:22.132 [10/268] Linking static target lib/librte_kvargs.a 00:01:22.132 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:22.132 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:22.393 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:22.393 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:22.393 [15/268] Linking static target lib/librte_log.a 00:01:22.393 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:22.967 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.967 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:22.967 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:22.967 [20/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:22.967 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:22.967 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:22.967 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:22.967 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:22.967 [25/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:22.967 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:22.967 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:22.967 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:22.967 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:22.967 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:23.230 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:23.230 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:23.230 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 
00:01:23.230 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:23.230 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:23.230 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:23.230 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:23.230 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:23.230 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:23.230 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:23.230 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:23.230 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:23.230 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:23.230 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:23.230 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:23.230 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:23.230 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:23.230 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:23.230 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:23.230 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:23.230 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:23.230 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:23.230 [53/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:23.230 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:23.230 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:23.230 [56/268] Linking static target lib/librte_telemetry.a 00:01:23.230 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:23.494 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:23.494 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:23.494 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:23.494 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:23.494 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:23.494 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:23.494 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:23.494 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.755 [66/268] Linking target lib/librte_log.so.24.1 00:01:23.755 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:23.755 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:24.017 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:24.017 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:24.017 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:24.017 [72/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:24.017 [73/268] Linking static target lib/librte_pci.a 00:01:24.017 [74/268] Generating symbol file 
lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:24.017 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:24.017 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:24.017 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:24.017 [78/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:24.017 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:24.017 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:24.017 [81/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:24.017 [82/268] Linking target lib/librte_kvargs.so.24.1 00:01:24.017 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:24.276 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:24.276 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:24.276 [86/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:24.276 [87/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:24.276 [88/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:24.276 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:24.276 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:24.276 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:24.276 [92/268] Linking static target lib/librte_meter.a 00:01:24.276 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:24.276 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:24.276 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:24.276 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:24.276 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:24.276 [98/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:24.276 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:24.276 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:24.276 [101/268] Linking static target lib/librte_ring.a 00:01:24.276 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:24.276 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:24.276 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:24.276 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:24.276 [106/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.276 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:24.276 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:24.276 [109/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:24.538 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:24.538 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:24.538 [112/268] Linking target lib/librte_telemetry.so.24.1 00:01:24.538 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:24.538 [114/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:24.538 [115/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:24.538 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:24.538 [117/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:24.538 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:24.538 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.538 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:24.538 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:24.801 [122/268] Linking static target lib/librte_mempool.a 00:01:24.801 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:24.801 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:24.801 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:24.801 [126/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.801 [127/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:24.801 [128/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:24.801 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:24.801 [130/268] Linking static target lib/librte_rcu.a 00:01:24.801 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:24.801 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:24.801 [133/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.059 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:25.059 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:25.059 [136/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:25.059 [137/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:25.059 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:25.059 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:25.059 [140/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:25.059 [141/268] Linking static target lib/librte_cmdline.a 00:01:25.059 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:25.319 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:25.319 [144/268] Linking static target lib/librte_eal.a 00:01:25.319 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:25.319 [146/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:25.319 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:25.319 [148/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:25.319 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:25.319 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:25.319 [151/268] Linking static target lib/librte_timer.a 00:01:25.319 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:25.319 [153/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.319 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 
00:01:25.319 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:25.319 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:25.583 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:25.583 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:25.583 [159/268] Linking static target lib/librte_dmadev.a 00:01:25.583 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.851 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:25.851 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.851 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:25.851 [164/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:25.851 [165/268] Linking static target lib/librte_net.a 00:01:25.851 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:25.851 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:25.851 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:25.851 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:25.851 [170/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:26.109 [171/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:26.109 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:26.109 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:26.109 [174/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.109 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:26.109 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:26.109 [177/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:26.109 [178/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:26.109 [179/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.109 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:26.109 [181/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.109 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:26.109 [183/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:26.109 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:26.109 [185/268] Linking static target lib/librte_power.a 00:01:26.109 [186/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:26.109 [187/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:26.367 [188/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:26.367 [189/268] Linking static target lib/librte_compressdev.a 00:01:26.367 [190/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:26.367 [191/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:26.367 [192/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:26.367 [193/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:26.367 [194/268] Generating 
drivers/rte_bus_vdev.pmd.c with a custom command 00:01:26.367 [195/268] Linking static target drivers/librte_bus_pci.a 00:01:26.367 [196/268] Linking static target lib/librte_hash.a 00:01:26.367 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:26.367 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:26.367 [199/268] Linking static target drivers/librte_bus_vdev.a 00:01:26.626 [200/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:26.627 [201/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:26.627 [202/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:26.627 [203/268] Linking static target lib/librte_reorder.a 00:01:26.627 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:26.627 [205/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.627 [206/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.627 [207/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.627 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:26.885 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:26.885 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:26.885 [211/268] Linking static target drivers/librte_mempool_ring.a 00:01:26.885 [212/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.885 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.885 [214/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:26.885 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.451 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:27.451 [217/268] Linking static target lib/librte_security.a 00:01:28.018 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.018 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:28.584 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:28.584 [221/268] Linking static target lib/librte_mbuf.a 00:01:28.842 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:28.842 [223/268] Linking static target lib/librte_cryptodev.a 00:01:29.100 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.036 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:30.036 [226/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.036 [227/268] Linking static target lib/librte_ethdev.a 00:01:30.970 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.970 [229/268] Linking target lib/librte_eal.so.24.1 00:01:31.228 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:31.228 [231/268] Linking target lib/librte_meter.so.24.1 00:01:31.228 [232/268] Linking target lib/librte_pci.so.24.1 00:01:31.228 [233/268] Linking target 
lib/librte_ring.so.24.1 00:01:31.228 [234/268] Linking target lib/librte_timer.so.24.1 00:01:31.228 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:31.228 [236/268] Linking target lib/librte_dmadev.so.24.1 00:01:31.486 [237/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:31.486 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:31.486 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:31.486 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:31.486 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:31.486 [242/268] Linking target lib/librte_rcu.so.24.1 00:01:31.486 [243/268] Linking target lib/librte_mempool.so.24.1 00:01:31.486 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:31.486 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:31.486 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:31.486 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:31.486 [248/268] Linking target lib/librte_mbuf.so.24.1 00:01:31.743 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:31.743 [250/268] Linking target lib/librte_compressdev.so.24.1 00:01:31.743 [251/268] Linking target lib/librte_reorder.so.24.1 00:01:31.743 [252/268] Linking target lib/librte_net.so.24.1 00:01:31.743 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:01:32.001 [254/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:32.001 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:32.001 [256/268] Linking target lib/librte_cmdline.so.24.1 00:01:32.001 [257/268] Linking target lib/librte_security.so.24.1 00:01:32.001 [258/268] Linking target lib/librte_hash.so.24.1 00:01:32.001 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:32.936 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:34.371 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.371 [262/268] Linking target lib/librte_ethdev.so.24.1 00:01:34.371 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:34.630 [264/268] Linking target lib/librte_power.so.24.1 00:01:56.560 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:56.560 [266/268] Linking static target lib/librte_vhost.a 00:01:56.560 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.560 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:56.560 INFO: autodetecting backend as ninja 00:01:56.560 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:56.817 CC lib/ut/ut.o 00:01:56.817 CC lib/log/log.o 00:01:56.817 CC lib/ut_mock/mock.o 00:01:56.817 CC lib/log/log_flags.o 00:01:56.817 CC lib/log/log_deprecated.o 00:01:57.076 LIB libspdk_ut.a 00:01:57.076 LIB libspdk_log.a 00:01:57.076 LIB libspdk_ut_mock.a 00:01:57.076 SO libspdk_ut.so.2.0 00:01:57.076 SO libspdk_log.so.7.0 00:01:57.076 SO libspdk_ut_mock.so.6.0 00:01:57.076 SYMLINK libspdk_ut.so 00:01:57.076 SYMLINK libspdk_ut_mock.so 00:01:57.076 SYMLINK 
libspdk_log.so 00:01:57.334 CC lib/dma/dma.o 00:01:57.334 CC lib/ioat/ioat.o 00:01:57.334 CXX lib/trace_parser/trace.o 00:01:57.334 CC lib/util/base64.o 00:01:57.334 CC lib/util/bit_array.o 00:01:57.334 CC lib/util/cpuset.o 00:01:57.334 CC lib/util/crc16.o 00:01:57.334 CC lib/util/crc32.o 00:01:57.334 CC lib/util/crc32c.o 00:01:57.334 CC lib/util/crc32_ieee.o 00:01:57.334 CC lib/util/crc64.o 00:01:57.334 CC lib/util/dif.o 00:01:57.334 CC lib/util/fd.o 00:01:57.334 CC lib/util/file.o 00:01:57.334 CC lib/util/hexlify.o 00:01:57.334 CC lib/util/iov.o 00:01:57.334 CC lib/util/math.o 00:01:57.334 CC lib/util/pipe.o 00:01:57.334 CC lib/util/strerror_tls.o 00:01:57.334 CC lib/util/string.o 00:01:57.334 CC lib/util/uuid.o 00:01:57.334 CC lib/util/fd_group.o 00:01:57.334 CC lib/util/xor.o 00:01:57.334 CC lib/util/zipf.o 00:01:57.334 CC lib/vfio_user/host/vfio_user_pci.o 00:01:57.334 CC lib/vfio_user/host/vfio_user.o 00:01:57.592 LIB libspdk_dma.a 00:01:57.592 SO libspdk_dma.so.4.0 00:01:57.592 SYMLINK libspdk_dma.so 00:01:57.592 LIB libspdk_vfio_user.a 00:01:57.592 LIB libspdk_ioat.a 00:01:57.592 SO libspdk_vfio_user.so.5.0 00:01:57.851 SO libspdk_ioat.so.7.0 00:01:57.851 SYMLINK libspdk_vfio_user.so 00:01:57.851 SYMLINK libspdk_ioat.so 00:01:58.108 LIB libspdk_util.a 00:01:58.108 SO libspdk_util.so.9.1 00:01:58.109 SYMLINK libspdk_util.so 00:01:58.367 CC lib/json/json_parse.o 00:01:58.367 CC lib/vmd/vmd.o 00:01:58.367 CC lib/json/json_util.o 00:01:58.367 CC lib/conf/conf.o 00:01:58.367 CC lib/rdma_utils/rdma_utils.o 00:01:58.367 CC lib/idxd/idxd.o 00:01:58.367 CC lib/json/json_write.o 00:01:58.367 CC lib/vmd/led.o 00:01:58.367 CC lib/idxd/idxd_user.o 00:01:58.367 CC lib/rdma_provider/common.o 00:01:58.367 CC lib/env_dpdk/env.o 00:01:58.367 CC lib/idxd/idxd_kernel.o 00:01:58.367 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:58.367 CC lib/env_dpdk/memory.o 00:01:58.367 CC lib/env_dpdk/pci.o 00:01:58.367 CC lib/env_dpdk/init.o 00:01:58.367 CC lib/env_dpdk/threads.o 00:01:58.367 LIB libspdk_trace_parser.a 00:01:58.367 CC lib/env_dpdk/pci_ioat.o 00:01:58.367 CC lib/env_dpdk/pci_virtio.o 00:01:58.367 CC lib/env_dpdk/pci_vmd.o 00:01:58.367 CC lib/env_dpdk/pci_idxd.o 00:01:58.367 CC lib/env_dpdk/pci_event.o 00:01:58.367 CC lib/env_dpdk/sigbus_handler.o 00:01:58.367 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:58.367 CC lib/env_dpdk/pci_dpdk.o 00:01:58.367 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:58.367 SO libspdk_trace_parser.so.5.0 00:01:58.625 SYMLINK libspdk_trace_parser.so 00:01:58.625 LIB libspdk_rdma_provider.a 00:01:58.625 SO libspdk_rdma_provider.so.6.0 00:01:58.625 LIB libspdk_conf.a 00:01:58.625 SO libspdk_conf.so.6.0 00:01:58.625 SYMLINK libspdk_rdma_provider.so 00:01:58.625 LIB libspdk_rdma_utils.a 00:01:58.625 SYMLINK libspdk_conf.so 00:01:58.883 SO libspdk_rdma_utils.so.1.0 00:01:58.883 LIB libspdk_json.a 00:01:58.883 SO libspdk_json.so.6.0 00:01:58.883 SYMLINK libspdk_rdma_utils.so 00:01:58.883 SYMLINK libspdk_json.so 00:01:59.141 CC lib/jsonrpc/jsonrpc_server.o 00:01:59.141 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:59.141 CC lib/jsonrpc/jsonrpc_client.o 00:01:59.141 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:59.141 LIB libspdk_idxd.a 00:01:59.141 SO libspdk_idxd.so.12.0 00:01:59.399 SYMLINK libspdk_idxd.so 00:01:59.399 LIB libspdk_vmd.a 00:01:59.399 SO libspdk_vmd.so.6.0 00:01:59.399 LIB libspdk_jsonrpc.a 00:01:59.399 SO libspdk_jsonrpc.so.6.0 00:01:59.399 SYMLINK libspdk_vmd.so 00:01:59.399 SYMLINK libspdk_jsonrpc.so 00:01:59.657 CC lib/rpc/rpc.o 00:01:59.915 LIB libspdk_rpc.a 00:01:59.915 SO 
libspdk_rpc.so.6.0 00:01:59.915 SYMLINK libspdk_rpc.so 00:02:00.173 CC lib/trace/trace.o 00:02:00.173 CC lib/notify/notify.o 00:02:00.173 CC lib/trace/trace_flags.o 00:02:00.173 CC lib/notify/notify_rpc.o 00:02:00.173 CC lib/trace/trace_rpc.o 00:02:00.173 CC lib/keyring/keyring.o 00:02:00.173 CC lib/keyring/keyring_rpc.o 00:02:00.173 LIB libspdk_notify.a 00:02:00.173 SO libspdk_notify.so.6.0 00:02:00.430 SYMLINK libspdk_notify.so 00:02:00.430 LIB libspdk_keyring.a 00:02:00.430 SO libspdk_keyring.so.1.0 00:02:00.430 LIB libspdk_trace.a 00:02:00.430 SO libspdk_trace.so.10.0 00:02:00.430 SYMLINK libspdk_keyring.so 00:02:00.430 SYMLINK libspdk_trace.so 00:02:00.688 CC lib/thread/thread.o 00:02:00.688 CC lib/thread/iobuf.o 00:02:00.688 CC lib/sock/sock.o 00:02:00.688 CC lib/sock/sock_rpc.o 00:02:01.255 LIB libspdk_sock.a 00:02:01.255 SO libspdk_sock.so.10.0 00:02:01.255 SYMLINK libspdk_sock.so 00:02:01.255 LIB libspdk_env_dpdk.a 00:02:01.255 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:01.255 CC lib/nvme/nvme_ctrlr.o 00:02:01.255 CC lib/nvme/nvme_fabric.o 00:02:01.255 CC lib/nvme/nvme_ns_cmd.o 00:02:01.255 CC lib/nvme/nvme_ns.o 00:02:01.255 CC lib/nvme/nvme_pcie_common.o 00:02:01.255 CC lib/nvme/nvme_pcie.o 00:02:01.255 CC lib/nvme/nvme_qpair.o 00:02:01.255 CC lib/nvme/nvme.o 00:02:01.255 CC lib/nvme/nvme_quirks.o 00:02:01.255 CC lib/nvme/nvme_transport.o 00:02:01.255 CC lib/nvme/nvme_discovery.o 00:02:01.255 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:01.255 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:01.255 CC lib/nvme/nvme_tcp.o 00:02:01.255 CC lib/nvme/nvme_opal.o 00:02:01.255 CC lib/nvme/nvme_io_msg.o 00:02:01.255 CC lib/nvme/nvme_poll_group.o 00:02:01.255 CC lib/nvme/nvme_zns.o 00:02:01.255 CC lib/nvme/nvme_auth.o 00:02:01.255 CC lib/nvme/nvme_stubs.o 00:02:01.255 CC lib/nvme/nvme_cuse.o 00:02:01.255 CC lib/nvme/nvme_rdma.o 00:02:01.514 SO libspdk_env_dpdk.so.14.1 00:02:01.514 SYMLINK libspdk_env_dpdk.so 00:02:02.890 LIB libspdk_thread.a 00:02:02.890 SO libspdk_thread.so.10.1 00:02:02.890 SYMLINK libspdk_thread.so 00:02:02.890 CC lib/virtio/virtio.o 00:02:02.890 CC lib/init/json_config.o 00:02:02.890 CC lib/blob/blobstore.o 00:02:02.890 CC lib/virtio/virtio_vhost_user.o 00:02:02.890 CC lib/accel/accel.o 00:02:02.890 CC lib/init/subsystem.o 00:02:02.890 CC lib/blob/request.o 00:02:02.890 CC lib/virtio/virtio_vfio_user.o 00:02:02.890 CC lib/init/subsystem_rpc.o 00:02:02.890 CC lib/blob/zeroes.o 00:02:02.890 CC lib/accel/accel_rpc.o 00:02:02.890 CC lib/init/rpc.o 00:02:02.890 CC lib/blob/blob_bs_dev.o 00:02:02.890 CC lib/accel/accel_sw.o 00:02:02.890 CC lib/virtio/virtio_pci.o 00:02:03.149 LIB libspdk_init.a 00:02:03.149 SO libspdk_init.so.5.0 00:02:03.407 SYMLINK libspdk_init.so 00:02:03.407 LIB libspdk_virtio.a 00:02:03.407 SO libspdk_virtio.so.7.0 00:02:03.407 SYMLINK libspdk_virtio.so 00:02:03.407 CC lib/event/app.o 00:02:03.407 CC lib/event/reactor.o 00:02:03.407 CC lib/event/log_rpc.o 00:02:03.407 CC lib/event/app_rpc.o 00:02:03.407 CC lib/event/scheduler_static.o 00:02:03.974 LIB libspdk_event.a 00:02:03.974 SO libspdk_event.so.14.0 00:02:04.231 SYMLINK libspdk_event.so 00:02:04.231 LIB libspdk_accel.a 00:02:04.231 SO libspdk_accel.so.15.1 00:02:04.231 SYMLINK libspdk_accel.so 00:02:04.489 CC lib/bdev/bdev.o 00:02:04.489 CC lib/bdev/bdev_rpc.o 00:02:04.489 CC lib/bdev/bdev_zone.o 00:02:04.489 CC lib/bdev/part.o 00:02:04.489 CC lib/bdev/scsi_nvme.o 00:02:04.489 LIB libspdk_nvme.a 00:02:04.747 SO libspdk_nvme.so.13.1 00:02:05.004 SYMLINK libspdk_nvme.so 00:02:06.904 LIB libspdk_blob.a 00:02:06.904 SO 
libspdk_blob.so.11.0 00:02:07.162 SYMLINK libspdk_blob.so 00:02:07.162 CC lib/blobfs/blobfs.o 00:02:07.162 CC lib/blobfs/tree.o 00:02:07.162 CC lib/lvol/lvol.o 00:02:07.728 LIB libspdk_bdev.a 00:02:07.728 SO libspdk_bdev.so.15.1 00:02:07.989 SYMLINK libspdk_bdev.so 00:02:07.989 CC lib/scsi/dev.o 00:02:07.989 CC lib/ublk/ublk.o 00:02:07.989 CC lib/nbd/nbd.o 00:02:07.989 CC lib/nvmf/ctrlr.o 00:02:07.989 CC lib/nbd/nbd_rpc.o 00:02:07.989 CC lib/ublk/ublk_rpc.o 00:02:07.989 CC lib/scsi/lun.o 00:02:07.989 CC lib/nvmf/ctrlr_discovery.o 00:02:07.989 CC lib/ftl/ftl_core.o 00:02:07.989 CC lib/scsi/port.o 00:02:07.989 CC lib/ftl/ftl_init.o 00:02:07.989 CC lib/nvmf/ctrlr_bdev.o 00:02:07.989 CC lib/scsi/scsi.o 00:02:07.989 CC lib/ftl/ftl_layout.o 00:02:07.989 CC lib/nvmf/subsystem.o 00:02:07.989 CC lib/scsi/scsi_bdev.o 00:02:07.990 CC lib/ftl/ftl_debug.o 00:02:07.990 CC lib/nvmf/nvmf.o 00:02:07.990 CC lib/nvmf/nvmf_rpc.o 00:02:07.990 CC lib/scsi/scsi_rpc.o 00:02:07.990 CC lib/scsi/scsi_pr.o 00:02:07.990 CC lib/ftl/ftl_sb.o 00:02:07.990 CC lib/ftl/ftl_io.o 00:02:07.990 CC lib/scsi/task.o 00:02:07.990 CC lib/nvmf/transport.o 00:02:07.990 CC lib/ftl/ftl_l2p.o 00:02:07.990 CC lib/ftl/ftl_l2p_flat.o 00:02:07.990 CC lib/nvmf/tcp.o 00:02:07.990 CC lib/nvmf/stubs.o 00:02:07.990 CC lib/ftl/ftl_nv_cache.o 00:02:07.990 CC lib/nvmf/mdns_server.o 00:02:07.990 CC lib/ftl/ftl_band.o 00:02:07.990 CC lib/nvmf/rdma.o 00:02:07.990 CC lib/ftl/ftl_band_ops.o 00:02:07.990 CC lib/nvmf/auth.o 00:02:07.990 CC lib/ftl/ftl_writer.o 00:02:07.990 CC lib/ftl/ftl_rq.o 00:02:07.990 CC lib/ftl/ftl_reloc.o 00:02:07.990 CC lib/ftl/ftl_l2p_cache.o 00:02:07.990 CC lib/ftl/ftl_p2l.o 00:02:07.990 CC lib/ftl/mngt/ftl_mngt.o 00:02:07.990 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:07.990 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:07.990 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:07.990 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:07.990 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:08.254 LIB libspdk_blobfs.a 00:02:08.254 SO libspdk_blobfs.so.10.0 00:02:08.517 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:08.517 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:08.517 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:08.517 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:08.517 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:08.517 SYMLINK libspdk_blobfs.so 00:02:08.517 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:08.517 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:08.517 CC lib/ftl/utils/ftl_conf.o 00:02:08.517 CC lib/ftl/utils/ftl_md.o 00:02:08.517 CC lib/ftl/utils/ftl_mempool.o 00:02:08.517 CC lib/ftl/utils/ftl_bitmap.o 00:02:08.517 CC lib/ftl/utils/ftl_property.o 00:02:08.517 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:08.517 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:08.517 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:08.517 LIB libspdk_lvol.a 00:02:08.517 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:08.517 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:08.781 SO libspdk_lvol.so.10.0 00:02:08.781 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:08.781 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:08.781 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:08.781 SYMLINK libspdk_lvol.so 00:02:08.781 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:08.781 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:08.781 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:08.781 CC lib/ftl/base/ftl_base_dev.o 00:02:08.781 CC lib/ftl/base/ftl_base_bdev.o 00:02:08.781 CC lib/ftl/ftl_trace.o 00:02:09.040 LIB libspdk_nbd.a 00:02:09.040 SO libspdk_nbd.so.7.0 00:02:09.040 SYMLINK libspdk_nbd.so 00:02:09.298 LIB libspdk_scsi.a 00:02:09.298 SO libspdk_scsi.so.9.0 00:02:09.298 SYMLINK 
libspdk_scsi.so 00:02:09.298 LIB libspdk_ublk.a 00:02:09.298 SO libspdk_ublk.so.3.0 00:02:09.555 SYMLINK libspdk_ublk.so 00:02:09.555 CC lib/vhost/vhost.o 00:02:09.555 CC lib/iscsi/conn.o 00:02:09.555 CC lib/iscsi/init_grp.o 00:02:09.555 CC lib/vhost/vhost_rpc.o 00:02:09.555 CC lib/vhost/vhost_scsi.o 00:02:09.555 CC lib/iscsi/iscsi.o 00:02:09.555 CC lib/iscsi/md5.o 00:02:09.555 CC lib/vhost/vhost_blk.o 00:02:09.555 CC lib/iscsi/param.o 00:02:09.555 CC lib/vhost/rte_vhost_user.o 00:02:09.555 CC lib/iscsi/portal_grp.o 00:02:09.555 CC lib/iscsi/tgt_node.o 00:02:09.555 CC lib/iscsi/iscsi_subsystem.o 00:02:09.555 CC lib/iscsi/iscsi_rpc.o 00:02:09.555 CC lib/iscsi/task.o 00:02:10.120 LIB libspdk_ftl.a 00:02:10.120 SO libspdk_ftl.so.9.0 00:02:10.686 SYMLINK libspdk_ftl.so 00:02:10.945 LIB libspdk_vhost.a 00:02:10.945 SO libspdk_vhost.so.8.0 00:02:10.945 SYMLINK libspdk_vhost.so 00:02:11.512 LIB libspdk_iscsi.a 00:02:11.512 LIB libspdk_nvmf.a 00:02:11.512 SO libspdk_iscsi.so.8.0 00:02:11.512 SO libspdk_nvmf.so.18.1 00:02:11.512 SYMLINK libspdk_iscsi.so 00:02:11.770 SYMLINK libspdk_nvmf.so 00:02:12.028 CC module/env_dpdk/env_dpdk_rpc.o 00:02:12.028 CC module/accel/error/accel_error.o 00:02:12.028 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:12.028 CC module/accel/ioat/accel_ioat.o 00:02:12.028 CC module/accel/error/accel_error_rpc.o 00:02:12.028 CC module/accel/dsa/accel_dsa.o 00:02:12.028 CC module/accel/ioat/accel_ioat_rpc.o 00:02:12.028 CC module/sock/posix/posix.o 00:02:12.028 CC module/blob/bdev/blob_bdev.o 00:02:12.028 CC module/accel/dsa/accel_dsa_rpc.o 00:02:12.028 CC module/accel/iaa/accel_iaa.o 00:02:12.028 CC module/keyring/linux/keyring.o 00:02:12.028 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:12.028 CC module/keyring/file/keyring.o 00:02:12.028 CC module/accel/iaa/accel_iaa_rpc.o 00:02:12.028 CC module/keyring/linux/keyring_rpc.o 00:02:12.028 CC module/keyring/file/keyring_rpc.o 00:02:12.028 CC module/scheduler/gscheduler/gscheduler.o 00:02:12.028 LIB libspdk_env_dpdk_rpc.a 00:02:12.286 SO libspdk_env_dpdk_rpc.so.6.0 00:02:12.286 SYMLINK libspdk_env_dpdk_rpc.so 00:02:12.286 LIB libspdk_keyring_linux.a 00:02:12.286 LIB libspdk_keyring_file.a 00:02:12.286 LIB libspdk_scheduler_gscheduler.a 00:02:12.286 LIB libspdk_scheduler_dpdk_governor.a 00:02:12.286 SO libspdk_keyring_linux.so.1.0 00:02:12.286 SO libspdk_keyring_file.so.1.0 00:02:12.286 SO libspdk_scheduler_gscheduler.so.4.0 00:02:12.286 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:12.286 LIB libspdk_accel_error.a 00:02:12.286 LIB libspdk_accel_ioat.a 00:02:12.286 LIB libspdk_scheduler_dynamic.a 00:02:12.286 SO libspdk_accel_error.so.2.0 00:02:12.286 LIB libspdk_accel_iaa.a 00:02:12.286 SO libspdk_accel_ioat.so.6.0 00:02:12.286 SO libspdk_scheduler_dynamic.so.4.0 00:02:12.286 SYMLINK libspdk_keyring_file.so 00:02:12.286 SYMLINK libspdk_keyring_linux.so 00:02:12.286 SYMLINK libspdk_scheduler_gscheduler.so 00:02:12.286 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:12.287 SO libspdk_accel_iaa.so.3.0 00:02:12.287 SYMLINK libspdk_accel_error.so 00:02:12.287 SYMLINK libspdk_scheduler_dynamic.so 00:02:12.287 SYMLINK libspdk_accel_ioat.so 00:02:12.287 LIB libspdk_accel_dsa.a 00:02:12.287 LIB libspdk_blob_bdev.a 00:02:12.287 SYMLINK libspdk_accel_iaa.so 00:02:12.544 SO libspdk_blob_bdev.so.11.0 00:02:12.544 SO libspdk_accel_dsa.so.5.0 00:02:12.544 SYMLINK libspdk_blob_bdev.so 00:02:12.544 SYMLINK libspdk_accel_dsa.so 00:02:12.804 CC module/bdev/null/bdev_null.o 00:02:12.804 CC module/bdev/raid/bdev_raid.o 
00:02:12.804 CC module/bdev/lvol/vbdev_lvol.o 00:02:12.804 CC module/bdev/passthru/vbdev_passthru.o 00:02:12.804 CC module/bdev/nvme/bdev_nvme.o 00:02:12.804 CC module/bdev/raid/bdev_raid_rpc.o 00:02:12.804 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:12.804 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:12.804 CC module/bdev/null/bdev_null_rpc.o 00:02:12.804 CC module/bdev/split/vbdev_split.o 00:02:12.804 CC module/bdev/raid/bdev_raid_sb.o 00:02:12.804 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:12.804 CC module/bdev/split/vbdev_split_rpc.o 00:02:12.804 CC module/bdev/raid/raid0.o 00:02:12.804 CC module/bdev/error/vbdev_error.o 00:02:12.804 CC module/bdev/aio/bdev_aio.o 00:02:12.804 CC module/bdev/nvme/nvme_rpc.o 00:02:12.804 CC module/bdev/delay/vbdev_delay.o 00:02:12.804 CC module/bdev/gpt/gpt.o 00:02:12.804 CC module/bdev/gpt/vbdev_gpt.o 00:02:12.804 CC module/bdev/error/vbdev_error_rpc.o 00:02:12.804 CC module/bdev/raid/raid1.o 00:02:12.804 CC module/bdev/nvme/bdev_mdns_client.o 00:02:12.804 CC module/bdev/aio/bdev_aio_rpc.o 00:02:12.804 CC module/bdev/malloc/bdev_malloc.o 00:02:12.804 CC module/bdev/raid/concat.o 00:02:12.804 CC module/bdev/nvme/vbdev_opal.o 00:02:12.804 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:12.804 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:12.804 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:12.804 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:12.804 CC module/bdev/ftl/bdev_ftl.o 00:02:12.804 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:12.804 CC module/blobfs/bdev/blobfs_bdev.o 00:02:12.804 CC module/bdev/iscsi/bdev_iscsi.o 00:02:12.804 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:12.804 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:12.804 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:12.804 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:12.804 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:12.804 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:12.804 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:13.062 LIB libspdk_bdev_error.a 00:02:13.062 LIB libspdk_blobfs_bdev.a 00:02:13.062 LIB libspdk_bdev_gpt.a 00:02:13.062 SO libspdk_bdev_error.so.6.0 00:02:13.062 SO libspdk_blobfs_bdev.so.6.0 00:02:13.062 SO libspdk_bdev_gpt.so.6.0 00:02:13.062 SYMLINK libspdk_bdev_error.so 00:02:13.321 LIB libspdk_bdev_split.a 00:02:13.321 SYMLINK libspdk_blobfs_bdev.so 00:02:13.321 SYMLINK libspdk_bdev_gpt.so 00:02:13.321 SO libspdk_bdev_split.so.6.0 00:02:13.321 LIB libspdk_sock_posix.a 00:02:13.321 LIB libspdk_bdev_ftl.a 00:02:13.321 SO libspdk_sock_posix.so.6.0 00:02:13.321 SO libspdk_bdev_ftl.so.6.0 00:02:13.321 SYMLINK libspdk_bdev_split.so 00:02:13.321 LIB libspdk_bdev_null.a 00:02:13.321 LIB libspdk_bdev_zone_block.a 00:02:13.321 LIB libspdk_bdev_passthru.a 00:02:13.321 SO libspdk_bdev_null.so.6.0 00:02:13.321 SO libspdk_bdev_zone_block.so.6.0 00:02:13.321 SO libspdk_bdev_passthru.so.6.0 00:02:13.321 SYMLINK libspdk_bdev_ftl.so 00:02:13.321 LIB libspdk_bdev_aio.a 00:02:13.321 SYMLINK libspdk_sock_posix.so 00:02:13.321 LIB libspdk_bdev_malloc.a 00:02:13.321 SO libspdk_bdev_aio.so.6.0 00:02:13.321 SYMLINK libspdk_bdev_null.so 00:02:13.321 SYMLINK libspdk_bdev_zone_block.so 00:02:13.321 SYMLINK libspdk_bdev_passthru.so 00:02:13.321 SO libspdk_bdev_malloc.so.6.0 00:02:13.321 LIB libspdk_bdev_iscsi.a 00:02:13.579 SYMLINK libspdk_bdev_aio.so 00:02:13.579 LIB libspdk_bdev_delay.a 00:02:13.579 SO libspdk_bdev_iscsi.so.6.0 00:02:13.579 SYMLINK libspdk_bdev_malloc.so 00:02:13.579 SO libspdk_bdev_delay.so.6.0 00:02:13.579 SYMLINK libspdk_bdev_iscsi.so 00:02:13.579 SYMLINK 
libspdk_bdev_delay.so 00:02:13.579 LIB libspdk_bdev_virtio.a 00:02:13.579 LIB libspdk_bdev_lvol.a 00:02:13.579 SO libspdk_bdev_virtio.so.6.0 00:02:13.579 SO libspdk_bdev_lvol.so.6.0 00:02:13.579 SYMLINK libspdk_bdev_virtio.so 00:02:13.579 SYMLINK libspdk_bdev_lvol.so 00:02:14.142 LIB libspdk_bdev_raid.a 00:02:14.142 SO libspdk_bdev_raid.so.6.0 00:02:14.400 SYMLINK libspdk_bdev_raid.so 00:02:15.773 LIB libspdk_bdev_nvme.a 00:02:15.773 SO libspdk_bdev_nvme.so.7.0 00:02:16.029 SYMLINK libspdk_bdev_nvme.so 00:02:16.287 CC module/event/subsystems/vmd/vmd.o 00:02:16.287 CC module/event/subsystems/iobuf/iobuf.o 00:02:16.287 CC module/event/subsystems/sock/sock.o 00:02:16.287 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:16.287 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:16.287 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:16.287 CC module/event/subsystems/scheduler/scheduler.o 00:02:16.287 CC module/event/subsystems/keyring/keyring.o 00:02:16.287 LIB libspdk_event_keyring.a 00:02:16.287 LIB libspdk_event_vhost_blk.a 00:02:16.287 LIB libspdk_event_scheduler.a 00:02:16.287 LIB libspdk_event_sock.a 00:02:16.287 LIB libspdk_event_vmd.a 00:02:16.544 LIB libspdk_event_iobuf.a 00:02:16.544 SO libspdk_event_keyring.so.1.0 00:02:16.544 SO libspdk_event_vhost_blk.so.3.0 00:02:16.544 SO libspdk_event_scheduler.so.4.0 00:02:16.544 SO libspdk_event_sock.so.5.0 00:02:16.544 SO libspdk_event_vmd.so.6.0 00:02:16.544 SO libspdk_event_iobuf.so.3.0 00:02:16.544 SYMLINK libspdk_event_keyring.so 00:02:16.544 SYMLINK libspdk_event_vhost_blk.so 00:02:16.544 SYMLINK libspdk_event_scheduler.so 00:02:16.544 SYMLINK libspdk_event_sock.so 00:02:16.544 SYMLINK libspdk_event_vmd.so 00:02:16.544 SYMLINK libspdk_event_iobuf.so 00:02:16.979 CC module/event/subsystems/accel/accel.o 00:02:16.979 LIB libspdk_event_accel.a 00:02:16.979 SO libspdk_event_accel.so.6.0 00:02:16.979 SYMLINK libspdk_event_accel.so 00:02:16.979 CC module/event/subsystems/bdev/bdev.o 00:02:17.236 LIB libspdk_event_bdev.a 00:02:17.236 SO libspdk_event_bdev.so.6.0 00:02:17.236 SYMLINK libspdk_event_bdev.so 00:02:17.494 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:17.494 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:17.494 CC module/event/subsystems/scsi/scsi.o 00:02:17.494 CC module/event/subsystems/ublk/ublk.o 00:02:17.494 CC module/event/subsystems/nbd/nbd.o 00:02:17.784 LIB libspdk_event_nbd.a 00:02:17.784 LIB libspdk_event_ublk.a 00:02:17.784 LIB libspdk_event_scsi.a 00:02:17.784 SO libspdk_event_nbd.so.6.0 00:02:17.784 SO libspdk_event_ublk.so.3.0 00:02:17.784 SO libspdk_event_scsi.so.6.0 00:02:17.784 SYMLINK libspdk_event_ublk.so 00:02:17.784 SYMLINK libspdk_event_nbd.so 00:02:17.784 SYMLINK libspdk_event_scsi.so 00:02:17.784 LIB libspdk_event_nvmf.a 00:02:17.784 SO libspdk_event_nvmf.so.6.0 00:02:17.784 SYMLINK libspdk_event_nvmf.so 00:02:18.056 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:18.056 CC module/event/subsystems/iscsi/iscsi.o 00:02:18.056 LIB libspdk_event_vhost_scsi.a 00:02:18.056 LIB libspdk_event_iscsi.a 00:02:18.056 SO libspdk_event_vhost_scsi.so.3.0 00:02:18.056 SO libspdk_event_iscsi.so.6.0 00:02:18.056 SYMLINK libspdk_event_vhost_scsi.so 00:02:18.056 SYMLINK libspdk_event_iscsi.so 00:02:18.317 SO libspdk.so.6.0 00:02:18.317 SYMLINK libspdk.so 00:02:18.581 CC app/trace_record/trace_record.o 00:02:18.581 CC app/spdk_top/spdk_top.o 00:02:18.581 CC test/rpc_client/rpc_client_test.o 00:02:18.581 CC app/spdk_lspci/spdk_lspci.o 00:02:18.581 CC app/spdk_nvme_perf/perf.o 00:02:18.581 CC 
app/spdk_nvme_identify/identify.o 00:02:18.581 CXX app/trace/trace.o 00:02:18.581 CC app/spdk_nvme_discover/discovery_aer.o 00:02:18.581 TEST_HEADER include/spdk/accel.h 00:02:18.581 TEST_HEADER include/spdk/accel_module.h 00:02:18.581 TEST_HEADER include/spdk/assert.h 00:02:18.581 TEST_HEADER include/spdk/barrier.h 00:02:18.581 TEST_HEADER include/spdk/base64.h 00:02:18.581 TEST_HEADER include/spdk/bdev.h 00:02:18.581 TEST_HEADER include/spdk/bdev_module.h 00:02:18.581 TEST_HEADER include/spdk/bdev_zone.h 00:02:18.581 TEST_HEADER include/spdk/bit_array.h 00:02:18.581 TEST_HEADER include/spdk/bit_pool.h 00:02:18.581 TEST_HEADER include/spdk/blob_bdev.h 00:02:18.581 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:18.581 TEST_HEADER include/spdk/blobfs.h 00:02:18.581 TEST_HEADER include/spdk/blob.h 00:02:18.581 TEST_HEADER include/spdk/conf.h 00:02:18.581 TEST_HEADER include/spdk/config.h 00:02:18.581 TEST_HEADER include/spdk/cpuset.h 00:02:18.581 TEST_HEADER include/spdk/crc16.h 00:02:18.581 TEST_HEADER include/spdk/crc32.h 00:02:18.581 TEST_HEADER include/spdk/crc64.h 00:02:18.581 TEST_HEADER include/spdk/dif.h 00:02:18.581 TEST_HEADER include/spdk/dma.h 00:02:18.581 TEST_HEADER include/spdk/env_dpdk.h 00:02:18.581 TEST_HEADER include/spdk/endian.h 00:02:18.581 TEST_HEADER include/spdk/env.h 00:02:18.581 TEST_HEADER include/spdk/event.h 00:02:18.581 TEST_HEADER include/spdk/fd_group.h 00:02:18.581 TEST_HEADER include/spdk/fd.h 00:02:18.581 TEST_HEADER include/spdk/file.h 00:02:18.581 TEST_HEADER include/spdk/gpt_spec.h 00:02:18.581 TEST_HEADER include/spdk/ftl.h 00:02:18.581 TEST_HEADER include/spdk/hexlify.h 00:02:18.581 TEST_HEADER include/spdk/histogram_data.h 00:02:18.581 TEST_HEADER include/spdk/idxd.h 00:02:18.581 TEST_HEADER include/spdk/idxd_spec.h 00:02:18.581 TEST_HEADER include/spdk/init.h 00:02:18.581 TEST_HEADER include/spdk/ioat.h 00:02:18.581 TEST_HEADER include/spdk/iscsi_spec.h 00:02:18.581 TEST_HEADER include/spdk/ioat_spec.h 00:02:18.581 TEST_HEADER include/spdk/json.h 00:02:18.581 TEST_HEADER include/spdk/jsonrpc.h 00:02:18.581 TEST_HEADER include/spdk/keyring.h 00:02:18.581 TEST_HEADER include/spdk/keyring_module.h 00:02:18.581 TEST_HEADER include/spdk/likely.h 00:02:18.581 TEST_HEADER include/spdk/log.h 00:02:18.581 TEST_HEADER include/spdk/lvol.h 00:02:18.581 TEST_HEADER include/spdk/memory.h 00:02:18.581 TEST_HEADER include/spdk/mmio.h 00:02:18.581 TEST_HEADER include/spdk/nbd.h 00:02:18.581 TEST_HEADER include/spdk/nvme.h 00:02:18.581 TEST_HEADER include/spdk/notify.h 00:02:18.581 TEST_HEADER include/spdk/nvme_intel.h 00:02:18.581 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:18.581 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:18.581 TEST_HEADER include/spdk/nvme_spec.h 00:02:18.581 TEST_HEADER include/spdk/nvme_zns.h 00:02:18.581 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:18.581 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:18.581 TEST_HEADER include/spdk/nvmf.h 00:02:18.581 TEST_HEADER include/spdk/nvmf_spec.h 00:02:18.581 TEST_HEADER include/spdk/nvmf_transport.h 00:02:18.581 TEST_HEADER include/spdk/opal.h 00:02:18.581 TEST_HEADER include/spdk/opal_spec.h 00:02:18.581 TEST_HEADER include/spdk/pci_ids.h 00:02:18.581 TEST_HEADER include/spdk/pipe.h 00:02:18.581 TEST_HEADER include/spdk/queue.h 00:02:18.581 TEST_HEADER include/spdk/reduce.h 00:02:18.581 TEST_HEADER include/spdk/rpc.h 00:02:18.581 TEST_HEADER include/spdk/scheduler.h 00:02:18.581 TEST_HEADER include/spdk/scsi.h 00:02:18.581 TEST_HEADER include/spdk/scsi_spec.h 00:02:18.581 TEST_HEADER 
include/spdk/sock.h 00:02:18.581 TEST_HEADER include/spdk/stdinc.h 00:02:18.581 TEST_HEADER include/spdk/string.h 00:02:18.581 TEST_HEADER include/spdk/thread.h 00:02:18.581 TEST_HEADER include/spdk/trace.h 00:02:18.581 TEST_HEADER include/spdk/trace_parser.h 00:02:18.581 TEST_HEADER include/spdk/tree.h 00:02:18.581 TEST_HEADER include/spdk/ublk.h 00:02:18.581 TEST_HEADER include/spdk/util.h 00:02:18.581 TEST_HEADER include/spdk/version.h 00:02:18.581 TEST_HEADER include/spdk/uuid.h 00:02:18.581 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:18.581 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:18.581 TEST_HEADER include/spdk/vhost.h 00:02:18.581 TEST_HEADER include/spdk/vmd.h 00:02:18.581 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:18.581 CC app/spdk_dd/spdk_dd.o 00:02:18.581 TEST_HEADER include/spdk/xor.h 00:02:18.581 TEST_HEADER include/spdk/zipf.h 00:02:18.581 CXX test/cpp_headers/accel.o 00:02:18.581 CC app/nvmf_tgt/nvmf_main.o 00:02:18.581 CXX test/cpp_headers/accel_module.o 00:02:18.581 CXX test/cpp_headers/assert.o 00:02:18.581 CXX test/cpp_headers/barrier.o 00:02:18.581 CXX test/cpp_headers/base64.o 00:02:18.581 CXX test/cpp_headers/bdev.o 00:02:18.581 CXX test/cpp_headers/bdev_module.o 00:02:18.581 CXX test/cpp_headers/bdev_zone.o 00:02:18.581 CXX test/cpp_headers/bit_array.o 00:02:18.581 CXX test/cpp_headers/bit_pool.o 00:02:18.581 CXX test/cpp_headers/blob_bdev.o 00:02:18.581 CXX test/cpp_headers/blobfs_bdev.o 00:02:18.581 CXX test/cpp_headers/blobfs.o 00:02:18.581 CXX test/cpp_headers/blob.o 00:02:18.581 CXX test/cpp_headers/conf.o 00:02:18.581 CXX test/cpp_headers/config.o 00:02:18.581 CXX test/cpp_headers/cpuset.o 00:02:18.581 CXX test/cpp_headers/crc16.o 00:02:18.581 CC app/iscsi_tgt/iscsi_tgt.o 00:02:18.581 CC app/spdk_tgt/spdk_tgt.o 00:02:18.581 CXX test/cpp_headers/crc32.o 00:02:18.581 CC test/app/histogram_perf/histogram_perf.o 00:02:18.581 CC test/env/vtophys/vtophys.o 00:02:18.581 CC examples/ioat/perf/perf.o 00:02:18.581 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:18.581 CC test/app/stub/stub.o 00:02:18.581 CC test/env/pci/pci_ut.o 00:02:18.581 CC examples/util/zipf/zipf.o 00:02:18.581 CC test/thread/poller_perf/poller_perf.o 00:02:18.581 CC test/env/memory/memory_ut.o 00:02:18.581 CC examples/ioat/verify/verify.o 00:02:18.581 CC test/app/jsoncat/jsoncat.o 00:02:18.581 CC app/fio/nvme/fio_plugin.o 00:02:18.581 CC test/dma/test_dma/test_dma.o 00:02:18.581 CC test/app/bdev_svc/bdev_svc.o 00:02:18.581 CC app/fio/bdev/fio_plugin.o 00:02:18.840 CC test/env/mem_callbacks/mem_callbacks.o 00:02:18.840 LINK spdk_lspci 00:02:18.840 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:18.840 LINK rpc_client_test 00:02:18.840 LINK nvmf_tgt 00:02:18.840 LINK poller_perf 00:02:18.840 LINK vtophys 00:02:18.840 LINK jsoncat 00:02:18.840 CXX test/cpp_headers/crc64.o 00:02:18.840 LINK spdk_nvme_discover 00:02:18.840 LINK histogram_perf 00:02:18.840 CXX test/cpp_headers/dif.o 00:02:18.840 CXX test/cpp_headers/dma.o 00:02:19.104 LINK interrupt_tgt 00:02:19.104 LINK env_dpdk_post_init 00:02:19.104 LINK zipf 00:02:19.104 CXX test/cpp_headers/endian.o 00:02:19.104 CXX test/cpp_headers/env_dpdk.o 00:02:19.104 CXX test/cpp_headers/env.o 00:02:19.104 CXX test/cpp_headers/event.o 00:02:19.104 CXX test/cpp_headers/fd_group.o 00:02:19.104 CXX test/cpp_headers/fd.o 00:02:19.104 CXX test/cpp_headers/file.o 00:02:19.104 CXX test/cpp_headers/ftl.o 00:02:19.104 CXX test/cpp_headers/gpt_spec.o 00:02:19.104 CXX test/cpp_headers/hexlify.o 00:02:19.104 LINK iscsi_tgt 00:02:19.104 CXX 
test/cpp_headers/histogram_data.o 00:02:19.104 LINK stub 00:02:19.104 LINK spdk_tgt 00:02:19.104 LINK bdev_svc 00:02:19.104 LINK spdk_trace_record 00:02:19.104 CXX test/cpp_headers/idxd.o 00:02:19.104 CXX test/cpp_headers/idxd_spec.o 00:02:19.104 LINK ioat_perf 00:02:19.104 CXX test/cpp_headers/init.o 00:02:19.104 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:19.104 LINK verify 00:02:19.104 CXX test/cpp_headers/ioat.o 00:02:19.104 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:19.364 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:19.364 CXX test/cpp_headers/ioat_spec.o 00:02:19.364 CXX test/cpp_headers/iscsi_spec.o 00:02:19.364 CXX test/cpp_headers/json.o 00:02:19.364 CXX test/cpp_headers/jsonrpc.o 00:02:19.364 CXX test/cpp_headers/keyring.o 00:02:19.364 CXX test/cpp_headers/keyring_module.o 00:02:19.364 CXX test/cpp_headers/likely.o 00:02:19.364 CXX test/cpp_headers/log.o 00:02:19.364 CXX test/cpp_headers/lvol.o 00:02:19.364 LINK spdk_dd 00:02:19.364 CXX test/cpp_headers/memory.o 00:02:19.364 CXX test/cpp_headers/mmio.o 00:02:19.364 CXX test/cpp_headers/nbd.o 00:02:19.364 LINK spdk_trace 00:02:19.364 CXX test/cpp_headers/notify.o 00:02:19.364 CXX test/cpp_headers/nvme.o 00:02:19.364 CXX test/cpp_headers/nvme_intel.o 00:02:19.364 CXX test/cpp_headers/nvme_ocssd.o 00:02:19.364 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:19.364 CXX test/cpp_headers/nvme_zns.o 00:02:19.364 CXX test/cpp_headers/nvme_spec.o 00:02:19.364 CXX test/cpp_headers/nvmf_cmd.o 00:02:19.364 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:19.628 CXX test/cpp_headers/nvmf.o 00:02:19.628 CXX test/cpp_headers/nvmf_spec.o 00:02:19.628 CXX test/cpp_headers/nvmf_transport.o 00:02:19.628 LINK test_dma 00:02:19.628 CXX test/cpp_headers/opal.o 00:02:19.628 CXX test/cpp_headers/opal_spec.o 00:02:19.628 LINK pci_ut 00:02:19.628 CXX test/cpp_headers/pci_ids.o 00:02:19.628 CC test/event/event_perf/event_perf.o 00:02:19.628 CXX test/cpp_headers/pipe.o 00:02:19.628 CC test/event/reactor/reactor.o 00:02:19.628 CC test/event/reactor_perf/reactor_perf.o 00:02:19.628 CC test/event/app_repeat/app_repeat.o 00:02:19.628 CC examples/sock/hello_world/hello_sock.o 00:02:19.628 CC examples/vmd/lsvmd/lsvmd.o 00:02:19.628 CC examples/thread/thread/thread_ex.o 00:02:19.628 CXX test/cpp_headers/queue.o 00:02:19.628 CXX test/cpp_headers/reduce.o 00:02:19.628 CXX test/cpp_headers/rpc.o 00:02:19.628 CC examples/idxd/perf/perf.o 00:02:19.628 CXX test/cpp_headers/scheduler.o 00:02:19.628 CC test/event/scheduler/scheduler.o 00:02:19.628 LINK nvme_fuzz 00:02:19.890 CXX test/cpp_headers/scsi.o 00:02:19.890 CXX test/cpp_headers/scsi_spec.o 00:02:19.890 CXX test/cpp_headers/sock.o 00:02:19.890 CC examples/vmd/led/led.o 00:02:19.890 CXX test/cpp_headers/stdinc.o 00:02:19.890 CXX test/cpp_headers/string.o 00:02:19.890 CXX test/cpp_headers/thread.o 00:02:19.890 CXX test/cpp_headers/trace.o 00:02:19.890 CXX test/cpp_headers/trace_parser.o 00:02:19.890 LINK spdk_bdev 00:02:19.890 CXX test/cpp_headers/tree.o 00:02:19.890 CXX test/cpp_headers/ublk.o 00:02:19.890 CXX test/cpp_headers/util.o 00:02:19.890 CXX test/cpp_headers/uuid.o 00:02:19.890 CXX test/cpp_headers/version.o 00:02:19.890 CXX test/cpp_headers/vfio_user_pci.o 00:02:19.890 CXX test/cpp_headers/vfio_user_spec.o 00:02:19.890 LINK reactor_perf 00:02:19.890 CXX test/cpp_headers/vhost.o 00:02:19.890 LINK reactor 00:02:19.890 LINK event_perf 00:02:19.890 CXX test/cpp_headers/vmd.o 00:02:19.890 CXX test/cpp_headers/xor.o 00:02:19.890 CXX test/cpp_headers/zipf.o 00:02:19.890 LINK mem_callbacks 00:02:19.890 LINK 
app_repeat 00:02:19.890 LINK lsvmd 00:02:19.890 LINK spdk_nvme 00:02:19.890 CC app/vhost/vhost.o 00:02:20.150 LINK led 00:02:20.150 LINK thread 00:02:20.150 LINK scheduler 00:02:20.150 LINK vhost_fuzz 00:02:20.150 LINK hello_sock 00:02:20.409 CC test/nvme/aer/aer.o 00:02:20.409 CC test/nvme/reset/reset.o 00:02:20.409 CC test/nvme/err_injection/err_injection.o 00:02:20.409 CC test/nvme/startup/startup.o 00:02:20.409 CC test/nvme/e2edp/nvme_dp.o 00:02:20.409 CC test/nvme/overhead/overhead.o 00:02:20.409 CC test/nvme/compliance/nvme_compliance.o 00:02:20.409 CC test/nvme/sgl/sgl.o 00:02:20.409 CC test/nvme/simple_copy/simple_copy.o 00:02:20.409 CC test/nvme/boot_partition/boot_partition.o 00:02:20.409 CC test/nvme/reserve/reserve.o 00:02:20.409 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:20.409 CC test/nvme/fused_ordering/fused_ordering.o 00:02:20.409 CC test/nvme/connect_stress/connect_stress.o 00:02:20.409 CC test/nvme/fdp/fdp.o 00:02:20.409 CC test/nvme/cuse/cuse.o 00:02:20.409 CC test/accel/dif/dif.o 00:02:20.409 LINK vhost 00:02:20.409 CC test/blobfs/mkfs/mkfs.o 00:02:20.409 CC test/lvol/esnap/esnap.o 00:02:20.409 LINK spdk_nvme_perf 00:02:20.409 LINK spdk_nvme_identify 00:02:20.409 LINK spdk_top 00:02:20.409 LINK idxd_perf 00:02:20.409 LINK startup 00:02:20.667 LINK connect_stress 00:02:20.667 LINK mkfs 00:02:20.667 LINK boot_partition 00:02:20.667 LINK err_injection 00:02:20.667 LINK doorbell_aers 00:02:20.667 LINK nvme_dp 00:02:20.667 LINK reserve 00:02:20.667 LINK sgl 00:02:20.667 LINK fused_ordering 00:02:20.667 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:20.667 CC examples/nvme/hello_world/hello_world.o 00:02:20.667 CC examples/nvme/reconnect/reconnect.o 00:02:20.667 LINK simple_copy 00:02:20.667 CC examples/nvme/hotplug/hotplug.o 00:02:20.667 CC examples/nvme/abort/abort.o 00:02:20.667 CC examples/nvme/arbitration/arbitration.o 00:02:20.667 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:20.667 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:20.667 CC examples/accel/perf/accel_perf.o 00:02:20.925 CC examples/blob/cli/blobcli.o 00:02:20.925 LINK reset 00:02:20.925 CC examples/blob/hello_world/hello_blob.o 00:02:20.925 LINK nvme_compliance 00:02:20.925 LINK overhead 00:02:20.925 LINK aer 00:02:20.925 LINK fdp 00:02:20.925 LINK memory_ut 00:02:20.925 LINK cmb_copy 00:02:20.925 LINK pmr_persistence 00:02:21.182 LINK dif 00:02:21.183 LINK hotplug 00:02:21.183 LINK hello_world 00:02:21.183 LINK hello_blob 00:02:21.183 LINK arbitration 00:02:21.183 LINK abort 00:02:21.440 LINK reconnect 00:02:21.440 LINK blobcli 00:02:21.440 CC test/bdev/bdevio/bdevio.o 00:02:21.440 LINK accel_perf 00:02:21.440 LINK nvme_manage 00:02:22.005 CC examples/bdev/hello_world/hello_bdev.o 00:02:22.005 CC examples/bdev/bdevperf/bdevperf.o 00:02:22.005 LINK bdevio 00:02:22.005 LINK iscsi_fuzz 00:02:22.005 LINK cuse 00:02:22.005 LINK hello_bdev 00:02:22.935 LINK bdevperf 00:02:23.193 CC examples/nvmf/nvmf/nvmf.o 00:02:23.758 LINK nvmf 00:02:27.042 LINK esnap 00:02:27.301 00:02:27.301 real 1m14.615s 00:02:27.301 user 11m15.648s 00:02:27.301 sys 2m24.861s 00:02:27.301 14:04:36 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:27.301 14:04:36 make -- common/autotest_common.sh@10 -- $ set +x 00:02:27.301 ************************************ 00:02:27.301 END TEST make 00:02:27.301 ************************************ 00:02:27.301 14:04:36 -- common/autotest_common.sh@1142 -- $ return 0 00:02:27.301 14:04:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:27.301 14:04:36 -- 
pm/common@29 -- $ signal_monitor_resources TERM 00:02:27.301 14:04:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:27.301 14:04:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.301 14:04:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:27.301 14:04:36 -- pm/common@44 -- $ pid=1148105 00:02:27.301 14:04:36 -- pm/common@50 -- $ kill -TERM 1148105 00:02:27.301 14:04:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.301 14:04:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:27.301 14:04:36 -- pm/common@44 -- $ pid=1148107 00:02:27.301 14:04:36 -- pm/common@50 -- $ kill -TERM 1148107 00:02:27.301 14:04:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.301 14:04:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:27.301 14:04:36 -- pm/common@44 -- $ pid=1148109 00:02:27.301 14:04:36 -- pm/common@50 -- $ kill -TERM 1148109 00:02:27.301 14:04:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.301 14:04:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:27.301 14:04:36 -- pm/common@44 -- $ pid=1148138 00:02:27.301 14:04:36 -- pm/common@50 -- $ sudo -E kill -TERM 1148138 00:02:27.301 14:04:36 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:27.301 14:04:36 -- nvmf/common.sh@7 -- # uname -s 00:02:27.302 14:04:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:27.302 14:04:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:27.302 14:04:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:27.302 14:04:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:27.302 14:04:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:27.302 14:04:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:27.302 14:04:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:27.302 14:04:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:27.302 14:04:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:27.302 14:04:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:27.302 14:04:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:27.302 14:04:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:27.302 14:04:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:27.302 14:04:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:27.302 14:04:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:27.302 14:04:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:27.302 14:04:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:27.302 14:04:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:27.302 14:04:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:27.302 14:04:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:27.302 14:04:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.302 14:04:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.302 14:04:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.302 14:04:36 -- paths/export.sh@5 -- # export PATH 00:02:27.302 14:04:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.302 14:04:36 -- nvmf/common.sh@47 -- # : 0 00:02:27.302 14:04:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:27.302 14:04:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:27.302 14:04:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:27.302 14:04:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:27.302 14:04:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:27.302 14:04:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:27.302 14:04:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:27.302 14:04:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:27.302 14:04:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:27.302 14:04:36 -- spdk/autotest.sh@32 -- # uname -s 00:02:27.302 14:04:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:27.302 14:04:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:27.302 14:04:36 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:27.302 14:04:36 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:27.302 14:04:36 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:27.302 14:04:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:27.302 14:04:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:27.302 14:04:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:27.302 14:04:36 -- spdk/autotest.sh@48 -- # udevadm_pid=1206820 00:02:27.302 14:04:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:27.302 14:04:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:27.302 14:04:36 -- pm/common@17 -- # local monitor 00:02:27.302 14:04:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.302 14:04:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.302 14:04:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.302 14:04:36 -- pm/common@21 -- # date +%s 00:02:27.302 14:04:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.302 14:04:36 -- pm/common@21 -- # date +%s 00:02:27.302 
14:04:36 -- pm/common@25 -- # sleep 1 00:02:27.302 14:04:36 -- pm/common@21 -- # date +%s 00:02:27.302 14:04:36 -- pm/common@21 -- # date +%s 00:02:27.302 14:04:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720613076 00:02:27.302 14:04:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720613076 00:02:27.302 14:04:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720613076 00:02:27.302 14:04:36 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720613076 00:02:27.302 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720613076_collect-vmstat.pm.log 00:02:27.302 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720613076_collect-cpu-load.pm.log 00:02:27.302 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720613076_collect-cpu-temp.pm.log 00:02:27.302 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720613076_collect-bmc-pm.bmc.pm.log 00:02:28.677 14:04:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:28.677 14:04:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:28.677 14:04:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:28.677 14:04:37 -- common/autotest_common.sh@10 -- # set +x 00:02:28.677 14:04:37 -- spdk/autotest.sh@59 -- # create_test_list 00:02:28.677 14:04:37 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:28.677 14:04:37 -- common/autotest_common.sh@10 -- # set +x 00:02:28.678 14:04:37 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:28.678 14:04:37 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.678 14:04:37 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.678 14:04:37 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:28.678 14:04:37 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.678 14:04:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:28.678 14:04:37 -- common/autotest_common.sh@1455 -- # uname 00:02:28.678 14:04:37 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:28.678 14:04:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:28.678 14:04:37 -- common/autotest_common.sh@1475 -- # uname 00:02:28.678 14:04:37 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:28.678 14:04:37 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:28.678 14:04:37 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:28.678 14:04:37 -- spdk/autotest.sh@72 -- # hash lcov 00:02:28.678 14:04:37 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:28.678 14:04:37 -- spdk/autotest.sh@80 -- # export 
'LCOV_OPTS= 00:02:28.678 --rc lcov_branch_coverage=1 00:02:28.678 --rc lcov_function_coverage=1 00:02:28.678 --rc genhtml_branch_coverage=1 00:02:28.678 --rc genhtml_function_coverage=1 00:02:28.678 --rc genhtml_legend=1 00:02:28.678 --rc geninfo_all_blocks=1 00:02:28.678 ' 00:02:28.678 14:04:37 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:28.678 --rc lcov_branch_coverage=1 00:02:28.678 --rc lcov_function_coverage=1 00:02:28.678 --rc genhtml_branch_coverage=1 00:02:28.678 --rc genhtml_function_coverage=1 00:02:28.678 --rc genhtml_legend=1 00:02:28.678 --rc geninfo_all_blocks=1 00:02:28.678 ' 00:02:28.678 14:04:37 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:28.678 --rc lcov_branch_coverage=1 00:02:28.678 --rc lcov_function_coverage=1 00:02:28.678 --rc genhtml_branch_coverage=1 00:02:28.678 --rc genhtml_function_coverage=1 00:02:28.678 --rc genhtml_legend=1 00:02:28.678 --rc geninfo_all_blocks=1 00:02:28.678 --no-external' 00:02:28.678 14:04:37 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:28.678 --rc lcov_branch_coverage=1 00:02:28.678 --rc lcov_function_coverage=1 00:02:28.678 --rc genhtml_branch_coverage=1 00:02:28.678 --rc genhtml_function_coverage=1 00:02:28.678 --rc genhtml_legend=1 00:02:28.678 --rc geninfo_all_blocks=1 00:02:28.678 --no-external' 00:02:28.678 14:04:37 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:28.678 lcov: LCOV version 1.14 00:02:28.678 14:04:37 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:46.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:46.762 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:56.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:56.731 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:56.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:56.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:56.990 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:56.990 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:56.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:56.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:56.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:56.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:56.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:56.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:56.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:56.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:56.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:56.991 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:56.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:56.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:56.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:56.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:56.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:56.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:56.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:56.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:56.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:56.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:56.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:56.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:56.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:57.248 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:01.428 14:05:10 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:01.428 14:05:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:01.428 14:05:10 -- common/autotest_common.sh@10 -- # set +x 00:03:01.428 14:05:10 -- spdk/autotest.sh@91 -- # rm -f 00:03:01.428 14:05:10 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.994 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:01.994 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:01.994 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:01.994 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:01.994 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:01.994 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:01.994 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:01.994 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:02.252 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:02.252 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:02.252 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:02.252 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:02.252 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:02.252 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:02.252 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:02.252 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:02.252 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:02.252 14:05:11 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:02.252 14:05:11 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:02.252 14:05:11 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:02.252 14:05:11 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:02.252 14:05:11 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:02.252 14:05:11 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:02.252 14:05:11 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:02.252 14:05:11 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:02.252 14:05:11 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:02.252 14:05:11 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:02.252 14:05:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:02.252 14:05:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:02.252 14:05:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:02.252 14:05:11 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:02.252 14:05:11 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:02.510 No valid GPT data, bailing 00:03:02.510 14:05:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:02.510 14:05:11 -- scripts/common.sh@391 -- # pt= 00:03:02.510 14:05:11 -- scripts/common.sh@392 -- # return 1 00:03:02.510 14:05:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:02.510 1+0 records in 00:03:02.510 1+0 records out 00:03:02.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00210042 s, 499 MB/s 00:03:02.510 14:05:11 -- spdk/autotest.sh@118 -- # sync 00:03:02.510 14:05:11 -- spdk/autotest.sh@120 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:03:02.510 14:05:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:02.510 14:05:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:04.409 14:05:13 -- spdk/autotest.sh@124 -- # uname -s 00:03:04.409 14:05:13 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:04.409 14:05:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:04.409 14:05:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:04.409 14:05:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.409 14:05:13 -- common/autotest_common.sh@10 -- # set +x 00:03:04.409 ************************************ 00:03:04.409 START TEST setup.sh 00:03:04.409 ************************************ 00:03:04.409 14:05:13 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:04.409 * Looking for test storage... 00:03:04.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:04.409 14:05:13 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:04.409 14:05:13 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:04.409 14:05:13 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:04.409 14:05:13 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:04.409 14:05:13 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.409 14:05:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:04.409 ************************************ 00:03:04.409 START TEST acl 00:03:04.409 ************************************ 00:03:04.409 14:05:13 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:04.409 * Looking for test storage... 
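The get_zoned_devs / is_block_zoned helpers traced during pre_cleanup above (and again in the acl test below) reduce to reading each NVMe namespace's sysfs "zoned" attribute and keeping anything that does not report "none". A minimal standalone sketch of that check, under the assumption that only /sys/block/nvme* devices are of interest, as in this run:
#!/usr/bin/env bash
# Hypothetical standalone version of the zoned-device filter seen in the xtrace output;
# not the SPDK helper itself.
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme/queue/zoned ]] || continue          # attribute may be absent on very old kernels
    dev=${nvme##*/}
    # A conventional namespace reports "none"; host-aware/host-managed devices report otherwise.
    if [[ $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[$dev]=1
    fi
done
echo "zoned nvme devices found: ${#zoned_devs[@]}"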
00:03:04.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:04.409 14:05:13 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:04.409 14:05:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:04.409 14:05:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:04.409 14:05:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:04.409 14:05:13 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:04.409 14:05:13 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:04.409 14:05:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:04.409 14:05:13 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:04.409 14:05:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:04.409 14:05:13 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:04.409 14:05:13 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:04.409 14:05:13 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:04.409 14:05:13 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:04.409 14:05:13 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:04.410 14:05:13 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.410 14:05:13 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.311 14:05:15 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:06.311 14:05:15 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:06.311 14:05:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.311 14:05:15 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:06.311 14:05:15 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.311 14:05:15 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:06.879 Hugepages 00:03:06.879 node hugesize free / total 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 00:03:06.879 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.879 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:07.138 14:05:16 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:07.138 14:05:16 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:07.138 14:05:16 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:07.138 14:05:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:07.138 ************************************ 00:03:07.138 START TEST denied 00:03:07.138 ************************************ 00:03:07.138 14:05:16 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:07.138 14:05:16 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:07.138 14:05:16 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:07.138 14:05:16 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:07.138 14:05:16 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.138 14:05:16 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:08.514 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:08.514 14:05:17 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:08.514 14:05:17 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:08.514 14:05:17 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:08.514 14:05:17 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:08.514 14:05:17 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:08.514 14:05:17 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:08.514 14:05:17 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:08.514 14:05:17 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:08.514 14:05:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.514 14:05:17 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.047 00:03:11.047 real 0m3.782s 00:03:11.047 user 0m1.093s 00:03:11.047 sys 0m1.756s 00:03:11.047 14:05:20 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.047 14:05:20 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:11.047 ************************************ 00:03:11.047 END TEST denied 00:03:11.047 ************************************ 00:03:11.047 14:05:20 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:11.047 14:05:20 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:11.047 14:05:20 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:11.047 14:05:20 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.047 14:05:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:11.047 ************************************ 00:03:11.047 START TEST allowed 00:03:11.047 ************************************ 00:03:11.047 14:05:20 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:11.047 14:05:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:11.047 14:05:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:11.047 14:05:20 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:11.047 14:05:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.047 14:05:20 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:13.636 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:13.636 14:05:22 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:13.636 14:05:22 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:13.636 14:05:22 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:13.636 14:05:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.636 14:05:22 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.010 00:03:15.010 real 0m3.789s 00:03:15.010 user 0m0.938s 00:03:15.010 sys 0m1.679s 00:03:15.010 14:05:24 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:15.010 14:05:24 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:15.010 ************************************ 00:03:15.010 END TEST allowed 00:03:15.010 ************************************ 00:03:15.010 14:05:24 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:15.010 00:03:15.010 real 0m10.301s 00:03:15.010 user 0m3.076s 00:03:15.010 sys 0m5.191s 00:03:15.010 14:05:24 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:15.010 14:05:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:15.010 ************************************ 00:03:15.010 END TEST acl 00:03:15.010 ************************************ 00:03:15.010 14:05:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:15.010 14:05:24 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:15.010 14:05:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:15.010 14:05:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.010 14:05:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:15.010 ************************************ 00:03:15.010 START TEST hugepages 00:03:15.010 ************************************ 00:03:15.010 14:05:24 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:15.010 * Looking for test storage... 00:03:15.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43317464 kB' 'MemAvailable: 46870236 kB' 'Buffers: 2724 kB' 'Cached: 10560268 kB' 'SwapCached: 0 kB' 'Active: 7642856 kB' 'Inactive: 3518424 kB' 'Active(anon): 7212356 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 602076 kB' 'Mapped: 195704 kB' 'Shmem: 6614068 kB' 'KReclaimable: 194940 kB' 'Slab: 566272 kB' 'SReclaimable: 194940 kB' 'SUnreclaim: 371332 kB' 'KernelStack: 12992 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562320 kB' 'Committed_AS: 8339548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.010 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.011 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.012 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.013 14:05:24 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:15.013 
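What the trace above amounts to: setup/common.sh's get_meminfo walks /proc/meminfo one "key: value" pair at a time, skipping every key it is not asked about (the long run of "continue" lines) until it reaches Hugepagesize, and echoes the value, 2048 kB. setup/hugepages.sh then records that as default_hugepages, zeroes any pre-allocated 2048 kB pools on both NUMA nodes (the repeated "echo 0" writes under /sys/devices/system/node/node*/hugepages/), and exports CLEAR_HUGE=yes before launching the default_setup test. A minimal sketch of those two steps, using hypothetical helper names rather than SPDK's actual functions:

    # Sketch only: return one field from /proc/meminfo (most values are in kB).
    meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    # Sketch only: drop any pre-allocated hugepages on every NUMA node before a test run.
    clear_node_hugepages() {
        local hp
        for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
            echo 0 > "$hp"    # requires root, same as the traced "echo 0" writes
        done
    }

    default_hugepages_kb=$(meminfo_value Hugepagesize)   # 2048 on this runner

With a 2048 kB default page size, the 2097152 kB request that follows in the log works out to exactly 1024 pages, which is the nr_hugepages=1024 seen a few lines later.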
14:05:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:15.013 14:05:24 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:15.013 14:05:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:15.013 14:05:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.013 14:05:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:15.013 ************************************ 00:03:15.013 START TEST default_setup 00:03:15.013 ************************************ 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.013 14:05:24 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:15.946 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:15.946 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:16.205 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:16.205 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:16.205 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:16.205 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:16.205 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:16.205 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:16.205 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:16.205 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:16.205 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:16.205 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:16.205 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:16.205 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:16.205 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:16.205 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:17.140 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45418224 kB' 'MemAvailable: 48970996 kB' 'Buffers: 2724 kB' 'Cached: 10560352 kB' 'SwapCached: 0 kB' 'Active: 7661012 kB' 'Inactive: 3518424 kB' 'Active(anon): 7230512 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619564 kB' 'Mapped: 195848 kB' 'Shmem: 6614152 kB' 'KReclaimable: 194940 kB' 'Slab: 566040 kB' 'SReclaimable: 194940 kB' 'SUnreclaim: 371100 kB' 
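The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines are scripts/setup.sh reporting that it has rebound the IOAT DMA channels and the NVMe SSD at 0000:88:00.0 from their kernel drivers to vfio-pci, so SPDK's user-space drivers can claim them during the test. The rebinding itself is ordinary sysfs driver plumbing; a generic sketch of what it amounts to (not the actual setup.sh logic, and using the BDF from this log purely as an example):

    bdf=0000:88:00.0
    # Detach the device from whatever kernel driver currently owns it.
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
    fi
    # Ask the PCI core to hand the device to vfio-pci on the next probe.
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe

Clearing driver_override and probing again hands the device back to its original driver once the run is over.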
'KernelStack: 12704 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8360076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 
14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.141 14:05:26 
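The first readback returns AnonPages-style data only for the key asked about: AnonHugePages is 0 kB, so anon=0 and none of the hugepage footprint here comes from transparent hugepages; verify_nr_hugepages then repeats the same scan for HugePages_Surp. The snapshot printed above is also self-consistent: HugePages_Total of 1024 at a Hugepagesize of 2048 kB accounts for the 2097152 kB reported as Hugetlb. A quick check of that arithmetic against a live /proc/meminfo:

    awk '/^HugePages_Total:/ {n=$2} /^Hugepagesize:/ {sz=$2}
         END {printf "%d pages x %d kB = %d kB\n", n, sz, n * sz}' /proc/meminfo
    # On this runner: 1024 pages x 2048 kB = 2097152 kB, matching Hugetlb: 2097152 kB.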
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45420436 kB' 'MemAvailable: 48973208 kB' 'Buffers: 2724 kB' 'Cached: 10560352 kB' 'SwapCached: 0 kB' 'Active: 7661440 kB' 'Inactive: 3518424 kB' 'Active(anon): 7230940 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620012 kB' 'Mapped: 195832 kB' 'Shmem: 6614152 kB' 'KReclaimable: 194940 kB' 'Slab: 566000 kB' 'SReclaimable: 194940 kB' 'SUnreclaim: 371060 kB' 'KernelStack: 12848 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8360096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.142 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.404 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- 
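Note the "[[ -e /sys/devices/system/node/node/meminfo ]]" test that recurs before each scan: get_meminfo takes an optional node argument, and with a node set it would read the per-node file (whose lines carry a "Node <N> " prefix that the "${mem[@]#Node +([0-9]) }" expansion strips) instead of the global /proc/meminfo. Here node is empty, so the literal path "node/meminfo" does not exist and every lookup in this log falls back to the global snapshot. A hedged sketch of that selection, with a hypothetical wrapper name:

    # Sketch only: emit node-local meminfo when a node is given, otherwise the global file.
    meminfo_source() {
        local node=$1
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            # Per-node lines look like "Node 0 MemTotal: ...", so strip the prefix.
            sed "s/^Node $node //" "/sys/devices/system/node/node$node/meminfo"
        else
            cat /proc/meminfo
        fi
    }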
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45420416 kB' 'MemAvailable: 48973188 kB' 'Buffers: 2724 kB' 'Cached: 10560372 kB' 'SwapCached: 0 kB' 'Active: 7660980 kB' 'Inactive: 3518424 kB' 'Active(anon): 7230480 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619492 kB' 'Mapped: 195756 kB' 'Shmem: 6614172 kB' 'KReclaimable: 194940 kB' 'Slab: 566016 kB' 'SReclaimable: 194940 kB' 'SUnreclaim: 371076 kB' 'KernelStack: 12848 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8360116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:17.405 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
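The wall of `[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]` / `continue` entries above and below is a single helper scanning every meminfo field with `IFS=': '` and `read -r var val _`, skipping everything except the key it was asked for. A minimal stand-alone sketch of that pattern (a simplified, hypothetical helper for illustration, not the exact setup/common.sh source):

    # get_value KEY [FILE] - print the value of KEY from a meminfo-style file.
    get_value() {
        local get=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip fields that do not match
            echo "$val"
            return 0
        done < "$file"
        return 1
    }

    # e.g. get_value HugePages_Rsvd would print 0 on the system traced here.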
00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 
14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.406 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:17.407 nr_hugepages=1024 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:17.407 resv_hugepages=0 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:17.407 surplus_hugepages=0 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:17.407 anon_hugepages=0 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45420900 
kB' 'MemAvailable: 48973672 kB' 'Buffers: 2724 kB' 'Cached: 10560392 kB' 'SwapCached: 0 kB' 'Active: 7661056 kB' 'Inactive: 3518424 kB' 'Active(anon): 7230556 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619568 kB' 'Mapped: 195756 kB' 'Shmem: 6614192 kB' 'KReclaimable: 194940 kB' 'Slab: 566016 kB' 'SReclaimable: 194940 kB' 'SUnreclaim: 371076 kB' 'KernelStack: 12880 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8360136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.407 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
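Earlier in this call the helper tested `[[ -e /sys/devices/system/node/node/meminfo ]]` with an empty node argument and kept `/proc/meminfo`; when it is later invoked with node=0 it switches to the per-node file and strips the leading "Node 0 " prefix so the same key/value parser applies. A hedged sketch of that source selection (hypothetical helper name, using sed instead of the script's extglob expansion):

    pick_meminfo_source() {
        local node=$1 mem_f=/proc/meminfo
        # With an empty node the path ".../node/node/meminfo" does not exist,
        # so the system-wide /proc/meminfo is kept.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node meminfo prefixes every line with "Node <id> "; drop it so
        # the same "Key: value" scan works for both sources.
        sed -E 's/^Node [0-9]+ //' "$mem_f"
    }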
00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.408 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
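Once this scan returns (immediately below), the caller checks that the kernel-reported HugePages_Total equals the requested pool plus surplus and reserved pages: 1024 == 1024 + 0 + 0 in this run. The same assertion with the values hard-coded from the trace, for illustration only:

    nr_hugepages=1024   # requested default pool (nr_hugepages=1024 echoed above)
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    total=1024          # HugePages_Total reported by /proc/meminfo

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool is consistent: $total pages"
    fi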
00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21298980 kB' 'MemUsed: 11530904 kB' 'SwapCached: 0 kB' 'Active: 5198444 kB' 'Inactive: 3242044 kB' 'Active(anon): 5085376 kB' 'Inactive(anon): 0 kB' 'Active(file): 113068 kB' 'Inactive(file): 3242044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8051760 kB' 'Mapped: 47680 kB' 'AnonPages: 391892 kB' 'Shmem: 4696648 kB' 'KernelStack: 7432 kB' 'PageTables: 4824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95836 kB' 'Slab: 317776 kB' 'SReclaimable: 95836 kB' 'SUnreclaim: 221940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.409 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:17.410 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.411 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.411 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.411 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.411 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:17.411 node0=1024 expecting 1024 00:03:17.411 14:05:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:17.411 00:03:17.411 real 0m2.437s 00:03:17.411 user 0m0.654s 00:03:17.411 sys 0m0.896s 00:03:17.411 14:05:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:17.411 14:05:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:17.411 ************************************ 00:03:17.411 END TEST default_setup 00:03:17.411 ************************************ 00:03:17.411 14:05:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:17.411 14:05:26 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:17.411 14:05:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:17.411 14:05:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.411 14:05:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:17.411 ************************************ 00:03:17.411 START TEST per_node_1G_alloc 00:03:17.411 ************************************ 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:17.411 14:05:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.411 14:05:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:18.790 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:18.790 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:18.790 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:18.790 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:18.790 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:18.790 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:18.790 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:18.790 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:18.790 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:18.790 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:18.790 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:18.790 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:18.790 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:18.790 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:18.790 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:18.790 
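The long runs of "-- # continue" in this trace are setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time until it reaches the requested key (HugePages_Surp, HugePages_Rsvd, AnonHugePages) and echoing that key's value. A minimal bash sketch of that behaviour, reconstructed from the trace itself rather than copied from the SPDK scripts (the name get_meminfo_sketch and the simplified line-by-line loop are assumptions; the real helper buffers the file with mapfile first), looks like this:

#!/usr/bin/env bash
shopt -s extglob

# Hypothetical sketch of the loop traced here: read /proc/meminfo (or a per-node
# meminfo file), skip every field until the requested key is found, then print
# that key's value -- e.g. HugePages_Total -> 1024, HugePages_Surp -> 0.
get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, the per-node file is used instead, matching the
        # trace's check against /sys/devices/system/node/node$node/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
                line=${line#Node +([0-9]) }      # per-node lines carry a "Node N " prefix
                IFS=': ' read -r var val _ <<<"$line"
                if [[ $var == "$get" ]]; then
                        echo "${val:-0}"
                        return 0
                fi
        done <"$mem_f"
        echo 0
}

# Usage matching the values seen in this run:
#   get_meminfo_sketch HugePages_Total   -> 1024
#   get_meminfo_sketch HugePages_Surp    -> 0
# The per_node_1G_alloc sizing traced above: 1048576 kB requested across nodes 0
# and 1 with a 2048 kB default hugepage size gives 512 pages per node, i.e.
#   NRHUGE=512 HUGENODE=0,1 scripts/setup.sh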
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:18.790 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.790 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45419012 kB' 'MemAvailable: 48971784 kB' 'Buffers: 2724 kB' 'Cached: 10560468 kB' 'SwapCached: 0 kB' 'Active: 7661404 kB' 'Inactive: 3518424 kB' 'Active(anon): 7230904 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619800 kB' 'Mapped: 195812 kB' 'Shmem: 6614268 kB' 'KReclaimable: 194940 kB' 'Slab: 566124 kB' 'SReclaimable: 194940 kB' 'SUnreclaim: 371184 kB' 'KernelStack: 12832 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8359956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.791 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45428012 kB' 'MemAvailable: 48980784 kB' 'Buffers: 2724 kB' 'Cached: 10560472 kB' 'SwapCached: 0 kB' 'Active: 7661020 kB' 'Inactive: 3518424 kB' 'Active(anon): 7230520 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619352 kB' 'Mapped: 195772 kB' 'Shmem: 6614272 kB' 'KReclaimable: 194940 kB' 'Slab: 566180 kB' 'SReclaimable: 194940 kB' 'SUnreclaim: 371240 kB' 'KernelStack: 12864 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8359976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.792 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.793 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45428404 kB' 'MemAvailable: 48981176 kB' 'Buffers: 2724 kB' 'Cached: 10560492 kB' 'SwapCached: 0 kB' 'Active: 7661404 kB' 'Inactive: 3518424 kB' 'Active(anon): 7230904 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619688 kB' 'Mapped: 195772 kB' 'Shmem: 6614292 kB' 'KReclaimable: 194940 kB' 'Slab: 566180 kB' 'SReclaimable: 194940 kB' 'SUnreclaim: 371240 kB' 'KernelStack: 12896 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8360364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 
14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.794 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.795 14:05:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.795 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:18.796 nr_hugepages=1024 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.796 
resv_hugepages=0 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.796 surplus_hugepages=0 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.796 anon_hugepages=0 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45428296 kB' 'MemAvailable: 48981068 kB' 'Buffers: 2724 kB' 'Cached: 10560516 kB' 'SwapCached: 0 kB' 'Active: 7661392 kB' 'Inactive: 3518424 kB' 'Active(anon): 7230892 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619752 kB' 'Mapped: 195772 kB' 'Shmem: 6614316 kB' 'KReclaimable: 194940 kB' 'Slab: 566136 kB' 'SReclaimable: 194940 kB' 'SUnreclaim: 371196 kB' 'KernelStack: 12896 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8360388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 
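[editor's note] The echoes just above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the consistency check traced at setup/hugepages.sh@107: the configured page count must equal nr_hugepages plus surplus plus reserved before the test proceeds, and @109/@110 then re-read HugePages_Total from meminfo and repeat the comparison. Roughly, using the values observed in this run (a sketch of the check, not the script itself):

  # Sketch of the accounting check traced at setup/hugepages.sh@107/@110.
  nr_hugepages=1024; surp=0; resv=0
  if (( 1024 == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=1024 (nr=$nr_hugepages surp=$surp resv=$resv)"
  else
    echo "hugepage accounting mismatch" >&2
  fi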
14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.796 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.797 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.798 14:05:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22351820 kB' 'MemUsed: 10478064 kB' 'SwapCached: 0 kB' 'Active: 5198092 kB' 'Inactive: 3242044 kB' 'Active(anon): 5085024 kB' 'Inactive(anon): 0 kB' 'Active(file): 113068 kB' 'Inactive(file): 3242044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8051804 kB' 'Mapped: 47696 kB' 'AnonPages: 391408 kB' 'Shmem: 4696692 kB' 'KernelStack: 7432 kB' 'PageTables: 4764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95836 kB' 'Slab: 317788 kB' 'SReclaimable: 95836 kB' 'SUnreclaim: 221952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.798 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 23076432 kB' 'MemUsed: 4635420 kB' 'SwapCached: 0 kB' 'Active: 2463368 kB' 'Inactive: 276380 kB' 'Active(anon): 2145936 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 276380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2511480 kB' 'Mapped: 148076 kB' 'AnonPages: 228360 kB' 'Shmem: 1917668 kB' 'KernelStack: 5464 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99104 kB' 'Slab: 248348 kB' 'SReclaimable: 99104 kB' 'SUnreclaim: 149244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
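The dump just above is what `get_meminfo HugePages_Surp 1` works from: setup/common.sh prefers `/sys/devices/system/node/node1/meminfo` over `/proc/meminfo` when a node index is given, strips the leading "Node N " from every key, and then scans key by key (the long run of `[[ ... == HugePages_Surp ]] / continue` checks) until it can echo the surplus count, 0 here, which hugepages.sh@117 folds into `nodes_test[1]`. A condensed, self-contained sketch of that lookup follows; it mirrors what the xtrace shows, but it is a simplified reconstruction (with a hypothetical function name), not the verbatim setup/common.sh helper.

```bash
#!/usr/bin/env bash
# Condensed sketch of the lookup traced above (simplified reconstruction,
# not the verbatim setup/common.sh get_meminfo): read a node's meminfo,
# drop the "Node N " key prefix, and print the requested field, default 0.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo mem var val _
    # Per-node files are preferred when a node index is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node keys look like "Node 1 HugePages_Surp:"
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    echo 0
}

get_meminfo_sketch HugePages_Surp 1   # -> 0 for the node1 dump shown above
```

The 0 returned here is added to the 512 pages already counted for node 1, which is why the trace ends with "node0=512 expecting 512", "node1=512 expecting 512" and the `[[ 512 == 512 ]]` assertion before END TEST per_node_1G_alloc.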
00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.799 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:18.800 node0=512 expecting 512 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.800 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:18.800 node1=512 expecting 512 00:03:18.801 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:18.801 00:03:18.801 real 0m1.442s 00:03:18.801 user 0m0.586s 00:03:18.801 sys 0m0.819s 00:03:18.801 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:18.801 14:05:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:18.801 ************************************ 00:03:18.801 END TEST per_node_1G_alloc 00:03:18.801 ************************************ 00:03:18.801 14:05:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:18.801 14:05:28 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:18.801 14:05:28 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.801 14:05:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.801 14:05:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:18.801 ************************************ 00:03:18.801 START TEST even_2G_alloc 00:03:18.801 ************************************ 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.801 14:05:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.178 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:20.178 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
00:03:20.178 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:20.178 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:20.178 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:20.178 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:20.178 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:20.178 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:20.178 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:20.178 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:20.178 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:20.178 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:20.178 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:20.178 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:20.178 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:20.178 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:20.178 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:20.178 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:20.178 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45429852 kB' 'MemAvailable: 48982624 kB' 'Buffers: 2724 kB' 'Cached: 10560608 kB' 'SwapCached: 0 kB' 'Active: 7661244 kB' 'Inactive: 3518424 kB' 'Active(anon): 7230744 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619608 kB' 'Mapped: 195780 kB' 'Shmem: 6614408 kB' 'KReclaimable: 194940 kB' 'Slab: 566568 kB' 'SReclaimable: 194940 kB' 'SUnreclaim: 371628 kB' 'KernelStack: 12912 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8360752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 
14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.179 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
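The even_2G_alloc run being traced here follows the same shape as the test above: hugepages.sh converts the 2 GiB request (`get_test_nr_hugepages 2097152`) into 1024 default-size pages, spreads them evenly over the two NUMA nodes, exports NRHUGE and HUGE_EVEN_ALLOC for scripts/setup.sh, and then verify_nr_hugepages re-reads /proc/meminfo (the AnonHugePages scan shown in this stretch) and the per-node files. The sketch below is an illustrative condensation of that split, assuming the 2048 kB Hugepagesize and two nodes reported in the dumps; names follow the trace, but this is not the script itself.

```bash
#!/usr/bin/env bash
# Illustrative condensation of the even_2G_alloc setup traced above
# (hugepages.sh): turn a kB-sized request into default-size hugepages and
# spread them evenly across the NUMA nodes before scripts/setup.sh runs.
size_kb=2097152                              # 2 GiB request (get_test_nr_hugepages 2097152)
hugepagesz_kb=2048                           # Hugepagesize reported in the meminfo dumps
nr_hugepages=$(( size_kb / hugepagesz_kb ))  # 1024 default-size pages

no_nodes=2                                   # this rig has two NUMA nodes
declare -a nodes_test
for (( node = no_nodes - 1; node >= 0; node-- )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))   # 512 pages each
done

# The test then exports NRHUGE=1024 and HUGE_EVEN_ALLOC=yes for scripts/setup.sh
# and re-reads /proc/meminfo plus the per-node files to confirm the split:
printf 'node%d=%d expecting 512\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"
```

On success the verify pass should report node0=512 and node1=512, mirroring the summary printed at the end of the per_node_1G_alloc test above.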
00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45430264 kB' 'MemAvailable: 48983036 kB' 'Buffers: 2724 kB' 'Cached: 10560612 kB' 'SwapCached: 0 kB' 'Active: 7661552 kB' 'Inactive: 3518424 kB' 'Active(anon): 7231052 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619944 kB' 'Mapped: 195780 kB' 'Shmem: 6614412 kB' 'KReclaimable: 194940 kB' 'Slab: 566536 kB' 'SReclaimable: 194940 kB' 'SUnreclaim: 371596 kB' 'KernelStack: 12944 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8360772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.180 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.181 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45430604 kB' 'MemAvailable: 48983376 kB' 'Buffers: 2724 kB' 'Cached: 10560628 kB' 'SwapCached: 0 kB' 'Active: 7661444 kB' 'Inactive: 3518424 kB' 'Active(anon): 7230944 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619820 kB' 'Mapped: 195780 kB' 'Shmem: 6614428 kB' 'KReclaimable: 194940 kB' 'Slab: 566588 kB' 'SReclaimable: 194940 kB' 'SUnreclaim: 371648 kB' 'KernelStack: 12928 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8360792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
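A minimal standalone sketch of the lookup the trace is performing at this point: setup/common.sh walks /proc/meminfo line by line with IFS=': ' and read, skipping every key until it reaches the one requested (HugePages_Rsvd in this pass) and then echoing its value. The helper name and simplified structure below are illustrative assumptions, not the exact setup/common.sh implementation, and per-node meminfo handling is omitted:

    # Sketch: look up one key in /proc/meminfo the way the traced loop does.
    get_meminfo_sketch() {
        local key=$1 mem_f=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            # var is the field name (e.g. "HugePages_Rsvd"), val its numeric value.
            if [[ $var == "$key" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        echo 0    # key not present in this meminfo file
    }
    # Example: get_meminfo_sketch HugePages_Rsvd   -> prints 0 on this runner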
00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.182 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.183 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.184 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.444 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:20.445 nr_hugepages=1024 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.445 resv_hugepages=0 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.445 surplus_hugepages=0 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.445 anon_hugepages=0 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
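By this point the trace has established surp=0 and resv=0 and echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0; the test then re-reads HugePages_Total to verify that the 1024 allocated 2 MiB pages (2 GiB, split evenly as 512 per NUMA node) account for the requested, reserved and surplus pages. A standalone paraphrase of that check, using awk instead of the script's read loop and with illustrative variable names, would be:

    # Sketch of the accounting check performed here (values from this run in comments).
    nr_hugepages=1024                                             # requested: even 2G alloc, 512 pages x 2 nodes
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)    # 0
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)    # 0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2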
00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45431980 kB' 'MemAvailable: 48984752 kB' 'Buffers: 2724 kB' 'Cached: 10560652 kB' 'SwapCached: 0 kB' 'Active: 7661420 kB' 'Inactive: 3518424 kB' 'Active(anon): 7230920 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619820 kB' 'Mapped: 195780 kB' 'Shmem: 6614452 kB' 'KReclaimable: 194940 kB' 'Slab: 566588 kB' 'SReclaimable: 194940 kB' 'SUnreclaim: 371648 kB' 'KernelStack: 12928 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8360816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 
14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.446 
14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.446 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22349068 kB' 'MemUsed: 10480816 kB' 'SwapCached: 0 kB' 'Active: 5197884 kB' 'Inactive: 3242044 kB' 'Active(anon): 5084816 kB' 'Inactive(anon): 0 kB' 'Active(file): 113068 kB' 'Inactive(file): 3242044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8051804 kB' 'Mapped: 47704 kB' 'AnonPages: 391316 kB' 'Shmem: 4696692 kB' 'KernelStack: 7464 kB' 'PageTables: 
4772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95836 kB' 'Slab: 317980 kB' 'SReclaimable: 95836 kB' 'SUnreclaim: 222144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.447 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 23083620 kB' 'MemUsed: 4628232 kB' 'SwapCached: 0 kB' 'Active: 2463652 kB' 'Inactive: 276380 kB' 'Active(anon): 2146220 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 276380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2511616 kB' 'Mapped: 148076 kB' 'AnonPages: 228524 kB' 'Shmem: 1917804 kB' 'KernelStack: 5464 kB' 'PageTables: 
3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99104 kB' 'Slab: 248608 kB' 'SReclaimable: 99104 kB' 'SUnreclaim: 149504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.448 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:20.449 node0=512 expecting 512 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:20.449 node1=512 expecting 512 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:20.449 00:03:20.449 real 0m1.487s 00:03:20.449 user 0m0.608s 00:03:20.449 sys 0m0.839s 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.449 14:05:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:20.449 ************************************ 00:03:20.449 END TEST even_2G_alloc 00:03:20.449 ************************************ 00:03:20.449 14:05:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:20.449 14:05:29 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:20.449 14:05:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.449 14:05:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.449 14:05:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:20.449 
************************************ 00:03:20.449 START TEST odd_alloc 00:03:20.449 ************************************ 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:20.449 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:20.450 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:20.450 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.450 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:20.450 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:20.450 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:20.450 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.450 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:20.450 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:20.450 14:05:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:20.450 14:05:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.450 14:05:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.383 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:21.383 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:21.383 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:21.383 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:21.383 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:21.383 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:21.383 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:03:21.383 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:21.383 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:21.383 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:21.383 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:21.383 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:21.383 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:21.383 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:21.383 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:21.383 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:21.383 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45412040 kB' 'MemAvailable: 48964796 kB' 'Buffers: 2724 kB' 'Cached: 10560736 kB' 'SwapCached: 0 kB' 'Active: 7659252 kB' 'Inactive: 3518424 kB' 'Active(anon): 7228752 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617460 kB' 'Mapped: 194968 kB' 'Shmem: 6614536 kB' 'KReclaimable: 194908 kB' 'Slab: 566188 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 371280 kB' 'KernelStack: 12928 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 
'Committed_AS: 8349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.647 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.648 14:05:30 setup.sh.hugepages.odd_alloc -- 
00:03:21.649 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:21.649 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.649 14:05:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:21.649 14:05:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:21.649 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:21.649 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.649 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:21.649 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:21.649 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.649 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.649 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.649 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.649 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.649 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.649 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.649 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.649 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45410492 kB' 'MemAvailable: 48963248 kB' 'Buffers: 2724 kB' 'Cached: 10560740 kB' 'SwapCached: 0 kB' 'Active: 7660348 kB' 'Inactive: 3518424 kB' 'Active(anon): 7229848 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618524 kB' 'Mapped: 194956 kB' 'Shmem: 6614540 kB' 'KReclaimable: 194908 kB' 'Slab: 566188 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 371280 kB' 'KernelStack: 13296 kB' 'PageTables: 9548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 8348176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196464 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB'
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.650 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.651 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45410272 kB' 'MemAvailable: 48963028 kB' 'Buffers: 2724 kB' 'Cached: 10560748 kB' 'SwapCached: 0 kB' 'Active: 7662052 kB' 'Inactive: 3518424 kB' 'Active(anon): 7231552 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620240 kB' 'Mapped: 194964 kB' 'Shmem: 6614548 kB' 'KReclaimable: 194908 kB' 'Slab: 566188 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 371280 kB' 'KernelStack: 13536 kB' 'PageTables: 10324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 8349560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196256 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB'
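The xtrace above shows how setup/common.sh resolves a single /proc/meminfo field: it slurps the file with mapfile, strips any leading "Node N " prefix, then walks the keys with IFS=': ' read -r until the requested name matches and echoes its value. The following is a minimal sketch reconstructed from the trace, not the actual setup/common.sh source; the per-NUMA-node sysfs path is inferred from the '[[ -e /sys/devices/system/node/node/meminfo ]]' check and is an assumption.

    #!/usr/bin/env bash
    shopt -s extglob                      # the +([0-9]) pattern below needs extglob
    get_meminfo() {                       # usage: get_meminfo <Field> [<numa-node>]
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem var val _
        # with a node argument, read that node's counters from sysfs instead (assumption)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # sysfs lines carry a "Node N " prefix; /proc lines do not
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    surp=$(get_meminfo HugePages_Surp)    # mirrors the surp=0 assignment in hugepages.sh above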
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:21.652 nr_hugepages=1025
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:21.652 resv_hugepages=0
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:21.652 surplus_hugepages=0
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:21.652 anon_hugepages=0
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.652 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.653 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45409184 kB' 'MemAvailable: 48961940 kB' 'Buffers: 2724 kB' 'Cached: 10560780 kB' 'SwapCached: 0 kB' 'Active: 7661256 kB' 'Inactive: 3518424 kB' 'Active(anon): 7230756 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619428 kB' 'Mapped: 194964 kB' 'Shmem: 6614580 kB' 'KReclaimable: 194908 kB' 'Slab: 566164 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 371256 kB' 'KernelStack: 13296 kB' 'PageTables: 9772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 8349580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196416 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB'
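Restating the accounting hugepages.sh just performed, using only the values visible in the meminfo dumps above (a worked restatement of the trace, not additional test logic): the odd_alloc case asked for 1025 hugepages, and the kernel reports all 1025 present, with the surplus and reserved counts at zero and AnonHugePages at 0 kB.

    # values as reported in the /proc/meminfo dumps above
    nr_hugepages=1025   # HugePages_Total
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    anon=0              # AnonHugePages (kB)
    (( 1025 == nr_hugepages + surp + resv ))   # hugepages.sh@107: the requested odd count is fully accounted for
    (( 1025 == nr_hugepages ))                 # hugepages.sh@109: the total alone already matches the request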
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22336812 kB' 'MemUsed: 10493072 kB' 'SwapCached: 0 kB' 'Active: 5196308 kB' 'Inactive: 3242044 kB' 'Active(anon): 5083240 kB' 'Inactive(anon): 0 kB' 'Active(file): 113068 kB' 'Inactive(file): 3242044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8051808 kB' 'Mapped: 47100 kB' 'AnonPages: 389668 kB' 'Shmem: 4696696 kB' 'KernelStack: 7576 kB' 'PageTables: 4768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95836 kB' 'Slab: 317816 kB' 'SReclaimable: 95836 kB' 'SUnreclaim: 221980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.654 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.655 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 23075028 kB' 'MemUsed: 4636824 kB' 'SwapCached: 0 kB' 'Active: 2463960 kB' 'Inactive: 276380 kB' 'Active(anon): 2146528 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 276380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2511716 kB' 'Mapped: 147856 kB' 'AnonPages: 228784 kB' 'Shmem: 1917904 kB' 'KernelStack: 5448 kB' 'PageTables: 3588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99072 kB' 'Slab: 248364 kB' 'SReclaimable: 99072 kB' 'SUnreclaim: 149292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
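(Editor's note: the get_meminfo HugePages_Surp 1 call traced above switches from /proc/meminfo to the node-local file and strips the "Node N " prefix before the same field scan. The sketch below is illustrative only; node_meminfo_value is an invented name, not the real helper.)

# Sketch: prefer /sys/devices/system/node/node<N>/meminfo when a node is given,
# strip its "Node <N> " prefix, then reuse the key/value scan from above.
shopt -s extglob
node_meminfo_value() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo mem
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # drop the leading "Node 1 "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# usage: node_meminfo_value HugePages_Surp 1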
00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.656 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.914 14:05:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.914 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.915 14:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:21.915 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.915 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:03:21.915 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.915 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.915 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:21.915 node0=512 expecting 513 00:03:21.915 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.915 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.915 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.915 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:21.915 node1=513 expecting 512 00:03:21.915 14:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:21.915 00:03:21.915 real 0m1.348s 00:03:21.915 user 0m0.577s 00:03:21.915 sys 0m0.726s 00:03:21.915 14:05:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:21.915 14:05:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:21.915 ************************************ 00:03:21.915 END TEST odd_alloc 00:03:21.915 ************************************ 00:03:21.915 14:05:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:21.915 14:05:31 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:21.915 14:05:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.915 14:05:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.915 14:05:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:21.915 ************************************ 00:03:21.915 START TEST custom_alloc 00:03:21.915 ************************************ 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:21.915 14:05:31 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.915 14:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.849 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:22.849 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:22.849 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:22.849 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:22.849 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:22.849 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:22.849 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:22.849 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:22.849 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:22.849 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:22.849 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:22.849 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
00:03:22.849 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:22.849 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:22.849 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:22.849 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:22.849 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:23.113 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 44389768 kB' 'MemAvailable: 47942524 kB' 'Buffers: 2724 kB' 'Cached: 10560868 kB' 'SwapCached: 0 kB' 'Active: 7659612 kB' 'Inactive: 3518424 kB' 'Active(anon): 7229112 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617720 kB' 'Mapped: 194968 kB' 'Shmem: 6614668 kB' 'KReclaimable: 194908 kB' 'Slab: 566004 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 371096 kB' 'KernelStack: 12848 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 8346924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
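(Editor's note: the custom_alloc run above requests HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and the global meminfo dump reports HugePages_Total: 1536. The snippet below is only an illustrative way to cross-check that split; it is not part of setup/hugepages.sh.)

# Sketch: confirm the per-node hugepage counts add up to the global total.
declare -A nodes_hp=([0]=512 [1]=1024)
total=0
for node in "${!nodes_hp[@]}"; do
    (( total += nodes_hp[node] ))
    per_node=$(awk '/HugePages_Total/ {print $NF}' \
        "/sys/devices/system/node/node$node/meminfo")
    echo "node$node: want ${nodes_hp[$node]}, kernel reports ${per_node:-?}"
done
global=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
echo "expected total: $total, /proc/meminfo reports: ${global:-?}"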
00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 44392296 kB' 'MemAvailable: 47945052 kB' 'Buffers: 2724 kB' 'Cached: 10560872 kB' 'SwapCached: 0 kB' 'Active: 7658964 kB' 'Inactive: 3518424 kB' 'Active(anon): 7228464 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616972 kB' 'Mapped: 195012 kB' 'Shmem: 6614672 kB' 'KReclaimable: 194908 kB' 'Slab: 565984 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 371076 kB' 'KernelStack: 12768 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 8347072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.116 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 44392348 kB' 'MemAvailable: 47945104 kB' 'Buffers: 2724 kB' 'Cached: 10560892 kB' 'SwapCached: 0 kB' 'Active: 7659272 kB' 'Inactive: 3518424 kB' 'Active(anon): 7228772 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617364 kB' 
'Mapped: 194952 kB' 'Shmem: 6614692 kB' 'KReclaimable: 194908 kB' 'Slab: 566048 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 371140 kB' 'KernelStack: 12880 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 8347464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.117 14:05:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.118 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
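The same key-scanning loop repeats for HugePages_Surp and HugePages_Rsvd, after which verify_nr_hugepages in setup/hugepages.sh checks the readings against the requested allocation (nr_hugepages=1536 in this run). A minimal sketch of that check, reusing the hypothetical get_meminfo_sketch above and not the actual SPDK script:

    # Hypothetical verification sketch, assuming get_meminfo_sketch from above.
    verify_nr_hugepages_sketch() {
        local want=$1                      # requested nr_hugepages (1536 here)
        local anon=0 surp resv total
        # AnonHugePages was read first (THP is "[madvise]" in this run, so anon
        # stays 0); surplus and reserved counts are collected the same way.
        surp=$(get_meminfo_sketch HugePages_Surp)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        total=$(get_meminfo_sketch HugePages_Total)
        echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
        # The pool is treated as healthy when the total equals the request plus
        # surplus and reserved pages; a shortfall makes this arithmetic test fail.
        (( total == want + surp + resv ))
    }

Called as verify_nr_hugepages_sketch 1536, this mirrors the summary the trace prints next (nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and the (( 1536 == nr_hugepages + surp + resv )) check that follows.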
00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:23.119 nr_hugepages=1536 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.119 resv_hugepages=0 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.119 surplus_hugepages=0 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.119 anon_hugepages=0 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 44392348 kB' 'MemAvailable: 47945104 kB' 'Buffers: 2724 kB' 'Cached: 10560916 kB' 'SwapCached: 0 kB' 'Active: 7659288 kB' 'Inactive: 3518424 kB' 'Active(anon): 7228788 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617356 kB' 'Mapped: 194952 kB' 'Shmem: 6614716 kB' 'KReclaimable: 194908 kB' 'Slab: 566048 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 371140 kB' 'KernelStack: 12880 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 8347484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
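
In the trace above, get_meminfo resolves a single field by splitting each meminfo line on IFS=': ' and skipping entries until the requested key (first HugePages_Rsvd, then HugePages_Total) matches, at which point it echoes the value and returns; get_nodes then walks /sys/devices/system/node/node*/ and records 512 huge pages for node0 and 1024 for node1 (1536 total). Below is a minimal sketch of that lookup pattern, assuming standard /proc and sysfs meminfo layouts; the names are illustrative and this is not the actual setup/common.sh helper.

    #!/usr/bin/env bash
    # Sketch only: resolve one key from /proc/meminfo, or from the per-node
    # meminfo file when a node id is supplied, mirroring the IFS=': ' / read
    # loop visible in the trace above.
    get_meminfo_value() {
        local key=$1 node=${2:-} file=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            # Per-node files prefix every entry with "Node <id> "; strip it so
            # both files parse as plain "Key: value" lines.
            [[ $line =~ ^Node\ [0-9]+\ +(.*)$ ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done <"$file"
        return 1
    }

    # In the run traced above these resolve to 1536, 0 and 0 respectively.
    get_meminfo_value HugePages_Total
    get_meminfo_value HugePages_Surp 0
    get_meminfo_value HugePages_Surp 1
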
00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22363980 kB' 'MemUsed: 10465904 kB' 'SwapCached: 0 kB' 'Active: 5195808 kB' 'Inactive: 3242044 kB' 'Active(anon): 5082740 kB' 'Inactive(anon): 0 kB' 'Active(file): 113068 kB' 'Inactive(file): 3242044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8051812 kB' 'Mapped: 47112 kB' 'AnonPages: 389140 kB' 'Shmem: 4696700 kB' 'KernelStack: 7416 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95836 kB' 'Slab: 317688 kB' 'SReclaimable: 95836 kB' 'SUnreclaim: 221852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.122 14:05:32 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 22028116 kB' 'MemUsed: 5683736 kB' 'SwapCached: 0 kB' 'Active: 2463976 kB' 'Inactive: 276380 kB' 'Active(anon): 2146544 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 276380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2511872 kB' 'Mapped: 147840 kB' 'AnonPages: 228712 kB' 'Shmem: 1918060 kB' 'KernelStack: 5496 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99072 kB' 'Slab: 248360 kB' 'SReclaimable: 99072 kB' 'SUnreclaim: 149288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.122 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:23.123 node0=512 expecting 512 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:23.123 node1=1024 expecting 1024 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:23.123 00:03:23.123 real 0m1.352s 00:03:23.123 user 0m0.595s 00:03:23.123 sys 0m0.712s 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:23.123 14:05:32 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:23.123 ************************************ 00:03:23.123 END TEST custom_alloc 00:03:23.123 ************************************ 00:03:23.123 14:05:32 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:23.123 14:05:32 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:23.123 14:05:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:23.123 14:05:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.123 14:05:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:23.123 ************************************ 00:03:23.123 START TEST no_shrink_alloc 00:03:23.123 ************************************ 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.124 14:05:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.512 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:24.512 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:24.512 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:24.512 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:24.512 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:24.512 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:24.512 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:24.512 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:24.512 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:24.512 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:24.512 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:24.512 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:24.512 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:24.512 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:24.512 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:24.512 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:24.512 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.512 14:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.512 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45420492 kB' 'MemAvailable: 48973248 kB' 'Buffers: 2724 kB' 'Cached: 10560996 kB' 'SwapCached: 0 kB' 'Active: 7659576 kB' 'Inactive: 3518424 kB' 'Active(anon): 7229076 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617456 kB' 'Mapped: 194984 kB' 'Shmem: 6614796 kB' 'KReclaimable: 194908 kB' 'Slab: 565876 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 370968 kB' 'KernelStack: 12896 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8347924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
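The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo (or a per-node meminfo file) field by field with IFS=': ' and read, after first snapshotting the whole file with mapfile and stripping any "Node <n> " prefixes. Below is a minimal standalone sketch of that parsing technique, assuming only a key name and an optional NUMA node number; it is not the exact setup/common.sh implementation.

#!/usr/bin/env bash
# Sketch: look up one field from /proc/meminfo, or from a per-node meminfo
# file when a NUMA node number is given. Values print in the units meminfo
# uses (kB, or plain counts for the HugePages_* fields).
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; drop that so the
    # same "Key: value" parsing works for both files.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    echo 0
}

# Usage: get_meminfo_sketch HugePages_Surp        # system-wide
#        get_meminfo_sketch HugePages_Free 1      # NUMA node 1 only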
00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
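At setup/hugepages.sh@96-97 in the trace above, verify_nr_hugepages first looks at the transparent_hugepage "enabled" setting ("always [madvise] never" on this machine) and, since THP is not fully disabled, records the current AnonHugePages figure before checking the explicitly reserved pool. A small illustrative sketch of that check, reading the same two sources (not the exact hugepages.sh code):

#!/usr/bin/env bash
# Sketch of the THP/anon check: if transparent hugepages are not set to
# [never], note how many anonymous huge pages are currently in use so they
# are not confused with the explicitly allocated HugeTLB pool.
thp_state=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon_kb=0
if [[ $thp_state != *"[never]"* ]]; then
    # AnonHugePages is reported in kB in /proc/meminfo.
    anon_kb=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
fi
echo "THP setting: $thp_state"
echo "AnonHugePages in use: ${anon_kb:-0} kB"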
00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.513 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45419688 kB' 'MemAvailable: 48972444 kB' 'Buffers: 2724 kB' 'Cached: 10560996 kB' 'SwapCached: 0 kB' 'Active: 7659728 kB' 'Inactive: 3518424 kB' 'Active(anon): 7229228 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617676 kB' 'Mapped: 195040 kB' 'Shmem: 6614796 kB' 'KReclaimable: 194908 kB' 'Slab: 565924 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 371016 kB' 'KernelStack: 12864 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8347940 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.514 14:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.514 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 
14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
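This second pass is the same get_meminfo walk, now pulling HugePages_Surp (and, at hugepages.sh@100 just after it, HugePages_Rsvd) so the test can compare the kernel's hugepage accounting with the 1024 pages it asked for. A rough sketch of that kind of sanity check, using plain awk over /proc/meminfo instead of the SPDK helper; the expected count of 1024 is an assumption for illustration, and the real assertion lives in verify_nr_hugepages.

#!/usr/bin/env bash
# Sketch: read the HugeTLB counters and compare the persistent pool size
# (total minus surplus) with an assumed expected page count.
expected=1024

read -r total free rsvd surp < <(awk '
    $1 == "HugePages_Total:" {t=$2}
    $1 == "HugePages_Free:"  {f=$2}
    $1 == "HugePages_Rsvd:"  {r=$2}
    $1 == "HugePages_Surp:"  {s=$2}
    END {print t, f, r, s}' /proc/meminfo)

echo "HugePages total=$total free=$free rsvd=$rsvd surp=$surp"
if (( total - surp == expected )); then
    echo "hugepage pool matches the expected $expected pages"
else
    echo "unexpected pool size: got $((total - surp)), wanted $expected" >&2
fi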
00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.515 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.516 14:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45418032 kB' 'MemAvailable: 48970788 kB' 'Buffers: 2724 kB' 'Cached: 10561020 kB' 'SwapCached: 0 kB' 'Active: 7662372 kB' 'Inactive: 3518424 kB' 'Active(anon): 7231872 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620248 kB' 'Mapped: 195496 kB' 'Shmem: 6614820 kB' 'KReclaimable: 194908 kB' 'Slab: 565948 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 371040 kB' 'KernelStack: 12912 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8351168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 
14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.516 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
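The get_meminfo helper in setup/common.sh traced above walks every "Key: value" pair of /proc/meminfo with IFS=': ' and only stops when the key equals the field it was asked for (HugePages_Rsvd here), which is why every other field shows up as a "continue" entry. A minimal stand-alone sketch of the same idea, with illustrative names rather than the literal SPDK helper:

    #!/usr/bin/env bash
    # get_meminfo_sketch KEY [NODE] - print the numeric value of KEY from
    # /proc/meminfo, or from the per-NUMA-node meminfo file when NODE is given.
    get_meminfo_sketch() {
        local get=$1 node=${2-} mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node <n> "; strip it so both
        # formats parse the same way, then scan for the requested key.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    get_meminfo_sketch HugePages_Rsvd      # system-wide value
    get_meminfo_sketch HugePages_Surp 0    # NUMA node 0 only

The backslash-escaped patterns such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d in the [[ ... ]] entries are simply how bash xtrace prints the right-hand side of a literal string comparison.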
00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.517 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.518 nr_hugepages=1024 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.518 resv_hugepages=0 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.518 surplus_hugepages=0 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.518 anon_hugepages=0 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45413056 kB' 'MemAvailable: 48965812 kB' 'Buffers: 2724 kB' 'Cached: 10561040 kB' 'SwapCached: 0 kB' 'Active: 7664884 kB' 'Inactive: 3518424 kB' 'Active(anon): 7234384 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622800 kB' 'Mapped: 195496 kB' 'Shmem: 6614840 kB' 'KReclaimable: 194908 kB' 'Slab: 565948 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 371040 kB' 'KernelStack: 12928 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8354104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196116 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.518 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
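The values echoed a few entries back (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the consistency check at setup/hugepages.sh@107-@110: HugePages_Total reported by the kernel is expected to equal the configured count plus surplus plus reserved pages. A tiny illustrative sketch of that check, not the SPDK source itself:

    # Read the hugepage counters from /proc/meminfo and verify they still add
    # up to what the test configured earlier in the run.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    nr_hugepages=1024   # the count this test configured (assumed for the sketch)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: HugePages_Total=$total"
    else
        echo "unexpected counts: total=$total surp=$surp resv=$resv" >&2
    fi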
00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.519 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21301860 kB' 'MemUsed: 11528024 kB' 'SwapCached: 0 kB' 'Active: 5195696 kB' 'Inactive: 3242044 kB' 'Active(anon): 5082628 kB' 'Inactive(anon): 0 kB' 'Active(file): 113068 kB' 'Inactive(file): 3242044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8051820 kB' 'Mapped: 47124 kB' 'AnonPages: 388996 kB' 'Shmem: 4696708 kB' 'KernelStack: 7432 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95836 kB' 'Slab: 317600 kB' 'SReclaimable: 95836 kB' 'SUnreclaim: 221764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.520 14:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.520 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
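At this point the per-node pass is running: get_nodes (setup/hugepages.sh@27-@33) found two NUMA nodes under /sys/devices/system/node and recorded 1024 hugepages on node0 and 0 on node1, and the loop above re-reads node0's meminfo for HugePages_Surp before printing the per-node summary that follows. The same bookkeeping can be sketched stand-alone (the array name mirrors the trace; the sysfs paths are the real kernel locations):

    # Collect the current hugepage count for every NUMA node.
    declare -A nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        # Per-node meminfo lines look like: "Node 0 HugePages_Total:  1024"
        nodes_sys[$n]=$(awk '/HugePages_Total:/ {print $4}' "$node/meminfo")
    done
    for n in "${!nodes_sys[@]}"; do
        echo "node$n=${nodes_sys[$n]}"
    done
    # On the host in this run this would print node0=1024 and node1=0.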
00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.779 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.780 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.780 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.780 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:24.780 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.780 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.780 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.780 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.780 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:24.780 node0=1024 expecting 1024 00:03:24.780 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:24.780 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:24.780 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:24.780 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:24.780 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.780 14:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.721 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.721 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:25.721 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.721 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.721 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.721 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:25.721 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:25.721 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:25.721 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.721 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.721 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.721 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.721 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.721 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:25.721 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:25.721 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:25.721 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.984 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45423508 kB' 'MemAvailable: 48976264 kB' 'Buffers: 2724 kB' 'Cached: 10561108 kB' 'SwapCached: 0 kB' 'Active: 7660228 kB' 'Inactive: 3518424 kB' 'Active(anon): 7229728 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618140 kB' 'Mapped: 194992 kB' 'Shmem: 6614908 kB' 'KReclaimable: 194908 kB' 'Slab: 565904 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 370996 kB' 'KernelStack: 12928 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8348164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.984 14:05:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.984 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
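The long run of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" entries around this point is setup/common.sh's get_meminfo walking every field of the /proc/meminfo dump printed above until it reaches the requested key (AnonHugePages for this call). A minimal sketch of that loop, reconstructed from the traced commands rather than copied from common.sh (function name and node handling are simplified here):

# Sketch reconstructed from the trace; not the actual setup/common.sh code.
get_meminfo_sketch() {
    local get=$1 node=${2:-}      # key to look up, optional NUMA node
    local mem_f=/proc/meminfo
    # The trace tests /sys/devices/system/node/node/meminfo because $node is
    # empty; with a node number the per-node meminfo file would be used
    # instead (per-node lines carry a "Node <n> " prefix that common.sh strips).
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Scan field by field until the requested key matches, then print its value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done <"$mem_f"
    return 1
}

Against the dump shown above, get_meminfo_sketch HugePages_Free would print 1024, and the AnonHugePages lookup traced here ends with "echo 0", i.e. anon=0.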
00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.985 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45424284 kB' 'MemAvailable: 48977040 kB' 'Buffers: 2724 kB' 'Cached: 10561112 kB' 'SwapCached: 0 kB' 'Active: 7659916 kB' 'Inactive: 3518424 kB' 'Active(anon): 7229416 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617788 kB' 'Mapped: 194968 kB' 'Shmem: 6614912 kB' 'KReclaimable: 194908 kB' 'Slab: 565908 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 371000 kB' 'KernelStack: 12960 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8348180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
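The entries here are the same scan again, now looking for HugePages_Surp in the second meminfo dump; that dump shows HugePages_Total and HugePages_Free both at 1024 with HugePages_Rsvd and HugePages_Surp at 0, which is the state behind the earlier "node0=1024 expecting 1024" check. Outside the harness, the same counters can be read from the standard hugetlb sysfs tree (illustrative commands, not part of the SPDK scripts; 2048 kB pages assumed, matching the Hugepagesize in the dump):

# Global hugetlb counters mirroring the meminfo fields scanned in this trace.
for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
    printf '%-20s %s\n' "$f:" "$(cat /sys/kernel/mm/hugepages/hugepages-2048kB/$f)"
done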
00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.986 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 
14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.987 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45424584 kB' 'MemAvailable: 48977340 kB' 'Buffers: 2724 kB' 'Cached: 10561116 kB' 'SwapCached: 0 kB' 'Active: 7659616 kB' 'Inactive: 3518424 kB' 'Active(anon): 7229116 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617480 kB' 'Mapped: 194968 kB' 'Shmem: 6614916 kB' 'KReclaimable: 194908 kB' 'Slab: 565964 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 371056 kB' 'KernelStack: 12960 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8348204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
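Each of these get_meminfo calls is made with an empty node argument, so the "-e /sys/devices/system/node/node/meminfo" test fails and the global /proc/meminfo is parsed; the per-node breakdown that the "node0=1024 expecting 1024" line refers to lives under the node0 sysfs tree instead. A hypothetical way to read that breakdown directly (illustrative only, not taken from hugepages.sh; 2 MiB page size assumed):

# Per-node view of the same counters for node0.
node=0
base=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB
echo "node${node}: total=$(cat $base/nr_hugepages) free=$(cat $base/free_hugepages)"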
00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.988 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
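
The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" entries in this stretch is xtrace output of a field lookup: setup/common.sh walks the meminfo contents one "key: value" pair at a time, skipping every line until it reaches the requested field (HugePages_Rsvd here), then echoes the value. A minimal standalone sketch of that pattern follows; the function name is illustrative and this reads /proc/meminfo directly, whereas the real helper also handles per-NUMA-node files (seen further down in the trace).

#!/usr/bin/env bash
# Sketch of the skip-until-match lookup traced above.
lookup_meminfo() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # same skip-until-match loop as in the trace
        echo "$val"                         # e.g. "0" for HugePages_Rsvd
        return 0
    done < /proc/meminfo
    return 1
}

lookup_meminfo HugePages_Rsvd   # prints 0 on this machine, per the trace
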
00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:25.989 nr_hugepages=1024 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:25.989 resv_hugepages=0 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:25.989 surplus_hugepages=0 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:25.989 anon_hugepages=0 00:03:25.989 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45424584 kB' 'MemAvailable: 48977340 kB' 'Buffers: 2724 kB' 'Cached: 10561152 kB' 'SwapCached: 0 kB' 'Active: 7659924 kB' 'Inactive: 3518424 kB' 'Active(anon): 7229424 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617760 kB' 'Mapped: 194968 kB' 'Shmem: 6614952 kB' 'KReclaimable: 194908 kB' 'Slab: 565956 kB' 'SReclaimable: 194908 kB' 'SUnreclaim: 371048 kB' 'KernelStack: 12960 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8348224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1906268 kB' 'DirectMap2M: 17936384 kB' 'DirectMap1G: 49283072 kB' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 
14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.990 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
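
The same helper is invoked twice around here: once without a node argument (global /proc/meminfo, which is where the "HugePages_Total: 1024" value above comes from) and once per NUMA node later (node0, via /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the trace strips at common.sh@29). A hedged sketch of that source selection, with illustrative names and a sed/awk simplification in place of the script's array handling:

#!/usr/bin/env bash
# Sketch of the node-aware meminfo source selection seen in the trace.
node_meminfo() {
    local key=$1 node=${2-} mem_f=/proc/meminfo
    # With a node argument, prefer that node's own meminfo file if it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix each line with "Node <n> "; strip it so the same
    # "key: value" lookup works for both sources.
    sed 's/^Node [0-9][0-9]* //' "$mem_f" |
        awk -F': *' -v k="$key" '$1 == k { print $2 }'
}

node_meminfo HugePages_Total      # whole system, e.g. 1024
node_meminfo HugePages_Surp 0     # NUMA node 0 only, e.g. 0
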
00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21314660 kB' 'MemUsed: 11515224 kB' 'SwapCached: 0 kB' 'Active: 5196160 kB' 'Inactive: 3242044 kB' 'Active(anon): 5083092 kB' 'Inactive(anon): 0 kB' 'Active(file): 113068 kB' 'Inactive(file): 3242044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8051828 kB' 'Mapped: 47128 kB' 'AnonPages: 389516 kB' 'Shmem: 4696716 kB' 'KernelStack: 7512 kB' 'PageTables: 4624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95836 kB' 'Slab: 317688 kB' 'SReclaimable: 95836 kB' 'SUnreclaim: 221852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.991 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.992 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:25.993 node0=1024 expecting 1024 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:25.993 00:03:25.993 real 0m2.805s 00:03:25.993 user 0m1.173s 00:03:25.993 sys 0m1.553s 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.993 14:05:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:25.993 ************************************ 00:03:25.993 END TEST no_shrink_alloc 00:03:25.993 ************************************ 00:03:25.993 14:05:35 setup.sh.hugepages -- 
common/autotest_common.sh@1142 -- # return 0 00:03:25.993 14:05:35 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:25.993 14:05:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:25.993 14:05:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:25.993 14:05:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.993 14:05:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.993 14:05:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.993 14:05:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.993 14:05:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:25.993 14:05:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.993 14:05:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.993 14:05:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.993 14:05:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.993 14:05:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:25.993 14:05:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:25.993 00:03:25.993 real 0m11.279s 00:03:25.993 user 0m4.366s 00:03:25.993 sys 0m5.805s 00:03:25.993 14:05:35 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.993 14:05:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:25.993 ************************************ 00:03:25.993 END TEST hugepages 00:03:25.993 ************************************ 00:03:25.993 14:05:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:25.993 14:05:35 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:25.993 14:05:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.993 14:05:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.993 14:05:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:26.250 ************************************ 00:03:26.250 START TEST driver 00:03:26.250 ************************************ 00:03:26.250 14:05:35 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:26.250 * Looking for test storage... 
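
The clear_hp trace just above (hugepages.sh@37-@45) is the suite's teardown: before the driver tests start it walks every NUMA node's hugepage directories in sysfs and writes 0 to each page-size pool, then exports CLEAR_HUGE=yes. Roughly, assuming the standard sysfs layout (a sketch, not the repository function, and it needs root):

#!/usr/bin/env bash
# Sketch of the hugepage cleanup traced above: zero every hugepage pool
# on every NUMA node via sysfs.
clear_hugepages() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # same flag the trace exports afterwards
}
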
00:03:26.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:26.250 14:05:35 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:26.251 14:05:35 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.251 14:05:35 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.780 14:05:37 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:28.780 14:05:37 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.780 14:05:37 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.780 14:05:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:28.780 ************************************ 00:03:28.780 START TEST guess_driver 00:03:28.780 ************************************ 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:28.780 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:28.780 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:28.780 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:28.780 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:28.780 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:28.780 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:28.780 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:28.780 14:05:37 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:28.780 Looking for driver=vfio-pci 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.780 14:05:37 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.712 14:05:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.713 14:05:39 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.713 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.646 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.646 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.646 14:05:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.903 14:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:30.903 14:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:30.903 14:05:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.903 14:05:40 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.428 00:03:33.428 real 0m4.662s 00:03:33.428 user 0m1.027s 00:03:33.428 sys 0m1.777s 00:03:33.428 14:05:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.428 14:05:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:33.428 ************************************ 00:03:33.428 END TEST guess_driver 00:03:33.428 ************************************ 00:03:33.428 14:05:42 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:33.428 00:03:33.428 real 0m7.085s 00:03:33.428 user 0m1.560s 00:03:33.428 sys 0m2.680s 00:03:33.428 14:05:42 
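The guess_driver trace above settles on vfio-pci because the host has populated IOMMU groups and modprobe --show-depends resolves vfio_pci to real kernel modules. A minimal stand-alone sketch of that decision, with an assumed uio_pci_generic fallback rather than the project's own helper:

#!/usr/bin/env bash
# Sketch: prefer vfio-pci when the IOMMU is usable; the fallback choice is an assumption.
shopt -s nullglob                      # so an empty iommu_groups directory really counts as zero groups
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci                  # IOMMU groups exist and vfio_pci resolves to .ko modules
    else
        echo uio_pci_generic           # assumption: typical no-IOMMU fallback
    fi
}
pick_driver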
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.428 14:05:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:33.428 ************************************ 00:03:33.428 END TEST driver 00:03:33.428 ************************************ 00:03:33.428 14:05:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:33.428 14:05:42 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:33.428 14:05:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.428 14:05:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.428 14:05:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:33.428 ************************************ 00:03:33.428 START TEST devices 00:03:33.428 ************************************ 00:03:33.428 14:05:42 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:33.428 * Looking for test storage... 00:03:33.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:33.428 14:05:42 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:33.428 14:05:42 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:33.428 14:05:42 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:33.428 14:05:42 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:34.800 14:05:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:34.800 14:05:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:34.800 14:05:44 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:34.800 14:05:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:34.800 14:05:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:34.800 14:05:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:34.800 14:05:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:34.800 14:05:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:34.800 14:05:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:34.800 
14:05:44 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:34.800 No valid GPT data, bailing 00:03:34.800 14:05:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:34.800 14:05:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:34.800 14:05:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:34.800 14:05:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:34.800 14:05:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:34.800 14:05:44 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:34.800 14:05:44 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:34.800 14:05:44 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.800 14:05:44 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.800 14:05:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:34.800 ************************************ 00:03:34.800 START TEST nvme_mount 00:03:34.800 ************************************ 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
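Before mounting anything, the devices suite above filters candidate disks: zoned namespaces are skipped, spdk-gpt.py reports no existing GPT, and the disk must be at least min_disk_size=3221225472 bytes (3 GiB). A rough stand-alone version of that filter (hypothetical loop; sysfs paths as in the trace):

min_disk_size=$((3 * 1024 * 1024 * 1024))        # 3221225472 bytes, as in the test
for dev in /sys/block/nvme*n*; do                # the harness additionally excludes *c* multipath nodes
    zoned=$(cat "$dev/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned != none ]] && continue             # skip zoned namespaces
    size=$(( $(cat "$dev/size") * 512 ))         # sysfs size is in 512-byte sectors
    (( size >= min_disk_size )) && echo "${dev##*/}: $size bytes"
done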
# (( part <= part_no )) 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:34.800 14:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:35.736 Creating new GPT entries in memory. 00:03:35.736 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:35.736 other utilities. 00:03:35.736 14:05:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:35.736 14:05:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.736 14:05:45 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:35.736 14:05:45 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:35.736 14:05:45 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:36.671 Creating new GPT entries in memory. 00:03:36.671 The operation has completed successfully. 00:03:36.671 14:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:36.671 14:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:36.671 14:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1226710 00:03:36.671 14:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.671 14:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:36.671 14:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.671 14:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:36.671 14:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:36.929 14:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.930 14:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.930 14:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:36.930 14:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:36.930 14:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.930 14:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.930 14:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:36.930 14:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:36.930 14:05:46 
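The nvme_mount steps above zap the GPT, create one partition at sectors 2048-2099199, format it ext4 and mount it for the dummy-file check. Reproduced by hand (device and mount point are illustrative; the harness waits for the partition uevent via sync_dev_uevents.sh instead of settling udev):

disk=/dev/nvme0n1                             # illustrative; use the disk selected above
mnt=/tmp/nvme_mount                           # illustrative mount point
sgdisk "$disk" --zap-all                      # destroy any existing GPT/MBR data
sgdisk "$disk" --new=1:2048:2099199           # one ~1 GiB partition starting at sector 2048
udevadm settle                                # crude stand-in for the test's uevent sync
mkfs.ext4 -qF "${disk}p1"                     # quiet, forced ext4 format
mkdir -p "$mnt" && mount "${disk}p1" "$mnt"   # mount point for the dummy test file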
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:36.930 14:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:36.930 14:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.930 14:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:36.930 14:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:36.930 14:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.930 14:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:37.864 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:38.124 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:38.124 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.124 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:38.124 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:38.124 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:38.124 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:38.383 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:38.383 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:38.383 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:38.383 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- 
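Teardown in the trace unmounts the test directory, then clears signatures from the partition and from the whole disk before the next variant re-formats it. The equivalent manual cleanup (paths illustrative):

mnt=/tmp/nvme_mount
mountpoint -q "$mnt" && umount "$mnt"     # unmount only if something is mounted there
wipefs --all /dev/nvme0n1p1               # drop the ext4 superblock on the partition
wipefs --all /dev/nvme0n1                 # drop the GPT headers and protective MBR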
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.383 14:05:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.320 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.579 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:39.579 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.580 14:05:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:40.514 14:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.824 14:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:40.824 14:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:40.824 14:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:40.824 14:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:40.824 14:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.824 14:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:40.824 14:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:40.824 14:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:40.824 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:40.824 00:03:40.824 real 0m6.034s 00:03:40.824 user 0m1.346s 00:03:40.824 sys 0m2.252s 00:03:40.824 14:05:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.824 14:05:50 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:40.824 ************************************ 00:03:40.824 END TEST nvme_mount 00:03:40.824 ************************************ 00:03:40.824 14:05:50 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:40.824 14:05:50 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:40.824 14:05:50 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.824 14:05:50 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.824 14:05:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:40.824 ************************************ 00:03:40.824 START TEST dm_mount 00:03:40.824 ************************************ 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:40.824 14:05:50 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:41.784 Creating new GPT entries in memory. 00:03:41.784 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:41.784 other utilities. 00:03:41.784 14:05:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:41.784 14:05:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.784 14:05:51 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:41.784 14:05:51 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:41.784 14:05:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:43.163 Creating new GPT entries in memory. 00:03:43.163 The operation has completed successfully. 00:03:43.163 14:05:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:43.163 14:05:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:43.163 14:05:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:43.163 14:05:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:43.163 14:05:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:44.101 The operation has completed successfully. 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1228979 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:44.101 14:05:53 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.102 14:05:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- 
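dm_mount above stacks a device-mapper target named nvme_dm_test on the two fresh partitions (both appear as holders of dm-0), formats it and mounts it. The trace does not show the table fed to dmsetup create, so the linear concatenation below is an assumption; sizes are taken in 512-byte sectors:

p1=/dev/nvme0n1p1; p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1"); s2=$(blockdev --getsz "$p2")   # partition sizes in sectors
{ echo "0 $s1 linear $p1 0"; echo "$s1 $s2 linear $p2 0"; } | dmsetup create nvme_dm_test
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p /tmp/dm_mount && mount /dev/mapper/nvme_dm_test /tmp/dm_mount
# teardown mirrors the trace: umount, "dmsetup remove --force nvme_dm_test", then wipefs on p1/p2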
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.036 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:45.294 14:05:54 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.294 14:05:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.230 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.488 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:46.488 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:46.488 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:46.488 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:46.488 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:46.488 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:46.488 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:46.488 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:46.488 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:46.488 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:46.488 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:46.488 14:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:46.488 00:03:46.488 real 0m5.765s 00:03:46.488 user 0m0.904s 00:03:46.488 sys 0m1.659s 00:03:46.488 14:05:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.488 14:05:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:46.488 ************************************ 00:03:46.488 END TEST dm_mount 00:03:46.488 ************************************ 00:03:46.488 14:05:55 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:46.488 14:05:55 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:46.746 14:05:55 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:46.746 14:05:55 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.746 14:05:55 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:46.746 14:05:55 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:46.746 14:05:55 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:46.746 14:05:55 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:47.004 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:47.004 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:47.004 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:47.004 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:47.004 14:05:56 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:47.004 14:05:56 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:47.004 14:05:56 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:47.004 14:05:56 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:47.004 14:05:56 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:47.004 14:05:56 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:47.004 14:05:56 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:47.004 00:03:47.004 real 0m13.651s 00:03:47.004 user 0m2.870s 00:03:47.004 sys 0m4.903s 00:03:47.004 14:05:56 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.004 14:05:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:47.004 ************************************ 00:03:47.004 END TEST devices 00:03:47.004 ************************************ 00:03:47.004 14:05:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:47.004 00:03:47.004 real 0m42.563s 00:03:47.004 user 0m11.985s 00:03:47.004 sys 0m18.728s 00:03:47.004 14:05:56 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.004 14:05:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:47.004 ************************************ 00:03:47.004 END TEST setup.sh 00:03:47.004 ************************************ 00:03:47.004 14:05:56 -- common/autotest_common.sh@1142 -- # return 0 00:03:47.004 14:05:56 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:47.938 Hugepages 00:03:47.938 node hugesize free / total 00:03:47.938 node0 1048576kB 0 / 0 00:03:47.938 node0 2048kB 2048 / 2048 00:03:47.938 node1 1048576kB 0 / 0 00:03:47.938 node1 2048kB 0 / 0 00:03:47.938 00:03:47.938 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:47.938 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:47.938 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:47.938 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:47.938 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:47.938 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:47.938 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:47.938 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:47.938 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:47.938 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:47.938 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:47.939 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:47.939 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:47.939 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:47.939 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:47.939 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:47.939 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:48.197 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:48.197 14:05:57 -- spdk/autotest.sh@130 -- # uname -s 00:03:48.197 14:05:57 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:48.197 14:05:57 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:48.197 14:05:57 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.132 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:49.132 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:49.132 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:49.132 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:49.132 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:49.132 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:49.132 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:49.132 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:49.132 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:49.132 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:49.132 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:49.132 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:49.132 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:49.391 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:49.391 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:49.391 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:50.328 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:50.328 14:05:59 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:51.263 14:06:00 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:51.263 14:06:00 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:51.263 14:06:00 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:51.263 14:06:00 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:51.263 14:06:00 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:51.263 14:06:00 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:51.263 14:06:00 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:51.263 14:06:00 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:51.263 14:06:00 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:51.263 14:06:00 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:51.263 14:06:00 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:51.263 14:06:00 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.638 Waiting for block devices as requested 00:03:52.638 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:52.638 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:52.638 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:52.897 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:52.897 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:52.897 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:52.897 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:53.155 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:53.155 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:53.155 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:53.155 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:53.414 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:53.414 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:53.414 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:53.414 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:53.695 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:53.695 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:53.695 14:06:03 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:53.695 14:06:03 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:53.695 14:06:03 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:53.695 14:06:03 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:53.695 14:06:03 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:53.695 14:06:03 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:53.695 14:06:03 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:53.695 14:06:03 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:53.695 14:06:03 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:53.695 14:06:03 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:53.695 14:06:03 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:53.695 14:06:03 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:53.695 14:06:03 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:53.695 14:06:03 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:53.695 14:06:03 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:53.695 14:06:03 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:53.695 14:06:03 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:53.695 14:06:03 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:53.695 14:06:03 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:53.695 14:06:03 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:53.695 14:06:03 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:53.695 14:06:03 -- common/autotest_common.sh@1557 -- # continue 00:03:53.695 14:06:03 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:53.695 14:06:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:53.695 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:03:53.695 14:06:03 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:53.695 14:06:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.695 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:03:53.695 14:06:03 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.076 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:55.076 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:55.076 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:55.076 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:55.076 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:55.076 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:55.076 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:55.076 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:55.076 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:55.076 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:03:55.076 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:55.076 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:55.076 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:55.076 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:55.076 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:55.076 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:56.013 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:56.013 14:06:05 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:56.013 14:06:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:56.013 14:06:05 -- common/autotest_common.sh@10 -- # set +x 00:03:56.272 14:06:05 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:56.272 14:06:05 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:56.272 14:06:05 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:56.272 14:06:05 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:56.272 14:06:05 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:56.272 14:06:05 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:56.272 14:06:05 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:56.272 14:06:05 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:56.272 14:06:05 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:56.272 14:06:05 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:56.272 14:06:05 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:56.272 14:06:05 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:56.272 14:06:05 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:56.272 14:06:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:56.272 14:06:05 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:56.272 14:06:05 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:56.272 14:06:05 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:56.272 14:06:05 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:56.272 14:06:05 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:03:56.272 14:06:05 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:03:56.272 14:06:05 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1234382 00:03:56.272 14:06:05 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.272 14:06:05 -- common/autotest_common.sh@1598 -- # waitforlisten 1234382 00:03:56.272 14:06:05 -- common/autotest_common.sh@829 -- # '[' -z 1234382 ']' 00:03:56.272 14:06:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.272 14:06:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:56.272 14:06:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.272 14:06:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:56.272 14:06:05 -- common/autotest_common.sh@10 -- # set +x 00:03:56.272 [2024-07-10 14:06:05.669475] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:03:56.272 [2024-07-10 14:06:05.669614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234382 ] 00:03:56.272 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.530 [2024-07-10 14:06:05.795942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.788 [2024-07-10 14:06:06.049963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.723 14:06:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:57.723 14:06:06 -- common/autotest_common.sh@862 -- # return 0 00:03:57.723 14:06:06 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:57.723 14:06:06 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:57.723 14:06:06 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:01.004 nvme0n1 00:04:01.004 14:06:10 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:01.004 [2024-07-10 14:06:10.276777] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:01.004 [2024-07-10 14:06:10.276850] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:01.004 request: 00:04:01.004 { 00:04:01.004 "nvme_ctrlr_name": "nvme0", 00:04:01.004 "password": "test", 00:04:01.004 "method": "bdev_nvme_opal_revert", 00:04:01.004 "req_id": 1 00:04:01.004 } 00:04:01.004 Got JSON-RPC error response 00:04:01.004 response: 00:04:01.004 { 00:04:01.004 "code": -32603, 00:04:01.004 "message": "Internal error" 00:04:01.004 } 00:04:01.004 14:06:10 -- common/autotest_common.sh@1604 -- # true 00:04:01.004 14:06:10 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:01.004 14:06:10 -- common/autotest_common.sh@1608 -- # killprocess 1234382 00:04:01.004 14:06:10 -- common/autotest_common.sh@948 -- # '[' -z 1234382 ']' 00:04:01.004 14:06:10 -- common/autotest_common.sh@952 -- # kill -0 1234382 00:04:01.004 14:06:10 -- common/autotest_common.sh@953 -- # uname 00:04:01.004 14:06:10 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:01.004 14:06:10 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1234382 00:04:01.004 14:06:10 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:01.004 14:06:10 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:01.004 14:06:10 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1234382' 00:04:01.004 killing process with pid 1234382 00:04:01.004 14:06:10 -- common/autotest_common.sh@967 -- # kill 1234382 00:04:01.004 14:06:10 -- common/autotest_common.sh@972 -- # wait 1234382 00:04:05.190 14:06:14 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:05.190 14:06:14 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:05.190 14:06:14 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:05.190 14:06:14 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:05.190 14:06:14 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:05.190 14:06:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:05.190 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:04:05.190 14:06:14 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:05.190 14:06:14 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:05.190 14:06:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.190 14:06:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.190 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:04:05.190 ************************************ 00:04:05.190 START TEST env 00:04:05.190 ************************************ 00:04:05.190 14:06:14 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:05.190 * Looking for test storage... 00:04:05.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:05.191 14:06:14 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:05.191 14:06:14 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.191 14:06:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.191 14:06:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.191 ************************************ 00:04:05.191 START TEST env_memory 00:04:05.191 ************************************ 00:04:05.191 14:06:14 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:05.191 00:04:05.191 00:04:05.191 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.191 http://cunit.sourceforge.net/ 00:04:05.191 00:04:05.191 00:04:05.191 Suite: memory 00:04:05.191 Test: alloc and free memory map ...[2024-07-10 14:06:14.213117] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:05.191 passed 00:04:05.191 Test: mem map translation ...[2024-07-10 14:06:14.254659] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:05.191 [2024-07-10 14:06:14.254716] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:05.191 [2024-07-10 14:06:14.254802] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:05.191 [2024-07-10 14:06:14.254834] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:05.191 passed 00:04:05.191 Test: mem map registration ...[2024-07-10 14:06:14.321744] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:05.191 [2024-07-10 14:06:14.321783] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:05.191 passed 00:04:05.191 Test: mem map adjacent registrations ...passed 00:04:05.191 00:04:05.191 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.191 suites 1 1 n/a 0 0 00:04:05.191 tests 4 4 4 0 0 00:04:05.191 asserts 152 152 152 0 n/a 00:04:05.191 00:04:05.191 Elapsed time = 0.241 seconds 00:04:05.191 00:04:05.191 real 0m0.261s 00:04:05.191 user 0m0.246s 00:04:05.191 sys 0m0.014s 00:04:05.191 14:06:14 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.191 14:06:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:05.191 ************************************ 00:04:05.191 END TEST env_memory 00:04:05.191 ************************************ 00:04:05.191 14:06:14 env -- common/autotest_common.sh@1142 -- # return 0 00:04:05.191 14:06:14 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:05.191 14:06:14 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.191 14:06:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.191 14:06:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.191 ************************************ 00:04:05.191 START TEST env_vtophys 00:04:05.191 ************************************ 00:04:05.191 14:06:14 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:05.191 EAL: lib.eal log level changed from notice to debug 00:04:05.191 EAL: Detected lcore 0 as core 0 on socket 0 00:04:05.191 EAL: Detected lcore 1 as core 1 on socket 0 00:04:05.191 EAL: Detected lcore 2 as core 2 on socket 0 00:04:05.191 EAL: Detected lcore 3 as core 3 on socket 0 00:04:05.191 EAL: Detected lcore 4 as core 4 on socket 0 00:04:05.191 EAL: Detected lcore 5 as core 5 on socket 0 00:04:05.191 EAL: Detected lcore 6 as core 8 on socket 0 00:04:05.191 EAL: Detected lcore 7 as core 9 on socket 0 00:04:05.191 EAL: Detected lcore 8 as core 10 on socket 0 00:04:05.191 EAL: Detected lcore 9 as core 11 on socket 0 00:04:05.191 EAL: Detected lcore 10 as core 12 on socket 0 00:04:05.191 EAL: Detected lcore 11 as core 13 on socket 0 00:04:05.191 EAL: Detected lcore 12 as core 0 on socket 1 00:04:05.191 EAL: Detected lcore 13 as core 1 on socket 1 00:04:05.191 EAL: Detected lcore 14 as core 2 on socket 1 00:04:05.191 EAL: Detected lcore 15 as core 3 on socket 1 00:04:05.191 EAL: Detected lcore 16 as core 4 on socket 1 00:04:05.191 EAL: Detected lcore 17 as core 5 on socket 1 00:04:05.191 EAL: Detected lcore 18 as core 8 on socket 1 00:04:05.191 EAL: Detected lcore 19 as core 9 on socket 1 00:04:05.191 EAL: Detected lcore 20 as core 10 on socket 1 00:04:05.191 EAL: Detected lcore 21 as core 11 on socket 1 00:04:05.191 EAL: Detected lcore 22 as core 12 on socket 1 00:04:05.191 EAL: Detected lcore 23 as core 13 on socket 1 00:04:05.191 EAL: Detected lcore 24 as core 0 on socket 0 00:04:05.191 EAL: Detected lcore 25 as core 1 on socket 0 00:04:05.191 EAL: Detected lcore 26 as core 2 on socket 0 00:04:05.191 EAL: Detected lcore 27 as core 3 on socket 0 00:04:05.191 EAL: Detected lcore 28 as core 4 on socket 0 00:04:05.191 EAL: Detected lcore 29 as core 5 on socket 0 00:04:05.191 EAL: Detected lcore 30 as core 8 on socket 0 00:04:05.191 EAL: Detected lcore 31 as core 9 on socket 0 00:04:05.191 EAL: Detected lcore 32 as core 10 on socket 0 00:04:05.191 EAL: Detected lcore 33 as core 11 on socket 0 00:04:05.191 EAL: Detected lcore 34 as core 12 on socket 0 00:04:05.191 EAL: Detected lcore 35 as core 13 on socket 0 00:04:05.191 EAL: Detected lcore 36 as core 0 on socket 1 00:04:05.191 EAL: Detected lcore 37 as core 1 on socket 1 00:04:05.191 EAL: Detected lcore 38 as core 2 on socket 1 00:04:05.191 EAL: Detected lcore 39 as core 3 on socket 1 00:04:05.191 EAL: Detected lcore 40 as core 4 on socket 1 00:04:05.191 EAL: Detected lcore 41 as core 5 on socket 1 00:04:05.191 EAL: Detected 
lcore 42 as core 8 on socket 1 00:04:05.191 EAL: Detected lcore 43 as core 9 on socket 1 00:04:05.191 EAL: Detected lcore 44 as core 10 on socket 1 00:04:05.191 EAL: Detected lcore 45 as core 11 on socket 1 00:04:05.191 EAL: Detected lcore 46 as core 12 on socket 1 00:04:05.191 EAL: Detected lcore 47 as core 13 on socket 1 00:04:05.191 EAL: Maximum logical cores by configuration: 128 00:04:05.191 EAL: Detected CPU lcores: 48 00:04:05.191 EAL: Detected NUMA nodes: 2 00:04:05.191 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:05.191 EAL: Detected shared linkage of DPDK 00:04:05.191 EAL: No shared files mode enabled, IPC will be disabled 00:04:05.191 EAL: Bus pci wants IOVA as 'DC' 00:04:05.191 EAL: Buses did not request a specific IOVA mode. 00:04:05.191 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:05.191 EAL: Selected IOVA mode 'VA' 00:04:05.191 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.191 EAL: Probing VFIO support... 00:04:05.191 EAL: IOMMU type 1 (Type 1) is supported 00:04:05.191 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:05.191 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:05.191 EAL: VFIO support initialized 00:04:05.191 EAL: Ask a virtual area of 0x2e000 bytes 00:04:05.191 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:05.191 EAL: Setting up physically contiguous memory... 00:04:05.191 EAL: Setting maximum number of open files to 524288 00:04:05.191 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:05.191 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:05.191 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:05.191 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.191 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:05.191 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.191 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.191 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:05.191 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:05.191 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.191 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:05.191 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.191 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.191 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:05.191 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:05.191 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.191 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:05.191 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.191 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.191 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:05.191 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:05.191 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.191 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:05.191 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.191 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.191 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:05.191 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:05.191 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:05.191 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.191 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:05.191 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:05.191 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.191 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:05.191 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:05.191 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.191 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:05.191 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:05.191 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.191 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:05.191 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:05.191 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.191 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:05.191 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:05.191 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.191 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:05.191 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:05.191 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.191 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:05.191 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:05.191 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.191 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:05.191 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:05.191 EAL: Hugepages will be freed exactly as allocated. 00:04:05.191 EAL: No shared files mode enabled, IPC is disabled 00:04:05.191 EAL: No shared files mode enabled, IPC is disabled 00:04:05.191 EAL: TSC frequency is ~2700000 KHz 00:04:05.191 EAL: Main lcore 0 is ready (tid=7f807d6faa40;cpuset=[0]) 00:04:05.191 EAL: Trying to obtain current memory policy. 00:04:05.191 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.191 EAL: Restoring previous memory policy: 0 00:04:05.191 EAL: request: mp_malloc_sync 00:04:05.191 EAL: No shared files mode enabled, IPC is disabled 00:04:05.192 EAL: Heap on socket 0 was expanded by 2MB 00:04:05.192 EAL: No shared files mode enabled, IPC is disabled 00:04:05.192 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:05.192 EAL: Mem event callback 'spdk:(nil)' registered 00:04:05.192 00:04:05.192 00:04:05.192 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.192 http://cunit.sourceforge.net/ 00:04:05.192 00:04:05.192 00:04:05.192 Suite: components_suite 00:04:05.758 Test: vtophys_malloc_test ...passed 00:04:05.758 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:05.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.758 EAL: Restoring previous memory policy: 4 00:04:05.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.758 EAL: request: mp_malloc_sync 00:04:05.758 EAL: No shared files mode enabled, IPC is disabled 00:04:05.758 EAL: Heap on socket 0 was expanded by 4MB 00:04:05.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.758 EAL: request: mp_malloc_sync 00:04:05.758 EAL: No shared files mode enabled, IPC is disabled 00:04:05.758 EAL: Heap on socket 0 was shrunk by 4MB 00:04:05.758 EAL: Trying to obtain current memory policy. 
00:04:05.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.758 EAL: Restoring previous memory policy: 4 00:04:05.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.759 EAL: request: mp_malloc_sync 00:04:05.759 EAL: No shared files mode enabled, IPC is disabled 00:04:05.759 EAL: Heap on socket 0 was expanded by 6MB 00:04:05.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.759 EAL: request: mp_malloc_sync 00:04:05.759 EAL: No shared files mode enabled, IPC is disabled 00:04:05.759 EAL: Heap on socket 0 was shrunk by 6MB 00:04:05.759 EAL: Trying to obtain current memory policy. 00:04:05.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.759 EAL: Restoring previous memory policy: 4 00:04:05.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.759 EAL: request: mp_malloc_sync 00:04:05.759 EAL: No shared files mode enabled, IPC is disabled 00:04:05.759 EAL: Heap on socket 0 was expanded by 10MB 00:04:05.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.759 EAL: request: mp_malloc_sync 00:04:05.759 EAL: No shared files mode enabled, IPC is disabled 00:04:05.759 EAL: Heap on socket 0 was shrunk by 10MB 00:04:05.759 EAL: Trying to obtain current memory policy. 00:04:05.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.759 EAL: Restoring previous memory policy: 4 00:04:05.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.759 EAL: request: mp_malloc_sync 00:04:05.759 EAL: No shared files mode enabled, IPC is disabled 00:04:05.759 EAL: Heap on socket 0 was expanded by 18MB 00:04:05.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.759 EAL: request: mp_malloc_sync 00:04:05.759 EAL: No shared files mode enabled, IPC is disabled 00:04:05.759 EAL: Heap on socket 0 was shrunk by 18MB 00:04:05.759 EAL: Trying to obtain current memory policy. 00:04:05.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.759 EAL: Restoring previous memory policy: 4 00:04:05.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.759 EAL: request: mp_malloc_sync 00:04:05.759 EAL: No shared files mode enabled, IPC is disabled 00:04:05.759 EAL: Heap on socket 0 was expanded by 34MB 00:04:05.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.759 EAL: request: mp_malloc_sync 00:04:05.759 EAL: No shared files mode enabled, IPC is disabled 00:04:05.759 EAL: Heap on socket 0 was shrunk by 34MB 00:04:06.017 EAL: Trying to obtain current memory policy. 00:04:06.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.017 EAL: Restoring previous memory policy: 4 00:04:06.017 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.017 EAL: request: mp_malloc_sync 00:04:06.017 EAL: No shared files mode enabled, IPC is disabled 00:04:06.017 EAL: Heap on socket 0 was expanded by 66MB 00:04:06.017 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.017 EAL: request: mp_malloc_sync 00:04:06.017 EAL: No shared files mode enabled, IPC is disabled 00:04:06.017 EAL: Heap on socket 0 was shrunk by 66MB 00:04:06.273 EAL: Trying to obtain current memory policy. 
00:04:06.273 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.273 EAL: Restoring previous memory policy: 4 00:04:06.273 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.273 EAL: request: mp_malloc_sync 00:04:06.273 EAL: No shared files mode enabled, IPC is disabled 00:04:06.273 EAL: Heap on socket 0 was expanded by 130MB 00:04:06.530 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.530 EAL: request: mp_malloc_sync 00:04:06.530 EAL: No shared files mode enabled, IPC is disabled 00:04:06.530 EAL: Heap on socket 0 was shrunk by 130MB 00:04:06.787 EAL: Trying to obtain current memory policy. 00:04:06.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.787 EAL: Restoring previous memory policy: 4 00:04:06.787 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.787 EAL: request: mp_malloc_sync 00:04:06.787 EAL: No shared files mode enabled, IPC is disabled 00:04:06.787 EAL: Heap on socket 0 was expanded by 258MB 00:04:07.352 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.352 EAL: request: mp_malloc_sync 00:04:07.352 EAL: No shared files mode enabled, IPC is disabled 00:04:07.352 EAL: Heap on socket 0 was shrunk by 258MB 00:04:07.610 EAL: Trying to obtain current memory policy. 00:04:07.610 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.867 EAL: Restoring previous memory policy: 4 00:04:07.867 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.867 EAL: request: mp_malloc_sync 00:04:07.867 EAL: No shared files mode enabled, IPC is disabled 00:04:07.867 EAL: Heap on socket 0 was expanded by 514MB 00:04:08.799 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.799 EAL: request: mp_malloc_sync 00:04:08.799 EAL: No shared files mode enabled, IPC is disabled 00:04:08.799 EAL: Heap on socket 0 was shrunk by 514MB 00:04:09.732 EAL: Trying to obtain current memory policy. 
00:04:09.732 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.989 EAL: Restoring previous memory policy: 4 00:04:09.989 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.989 EAL: request: mp_malloc_sync 00:04:09.989 EAL: No shared files mode enabled, IPC is disabled 00:04:09.989 EAL: Heap on socket 0 was expanded by 1026MB 00:04:11.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.143 EAL: request: mp_malloc_sync 00:04:12.143 EAL: No shared files mode enabled, IPC is disabled 00:04:12.143 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:14.041 passed 00:04:14.041 00:04:14.041 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.041 suites 1 1 n/a 0 0 00:04:14.041 tests 2 2 2 0 0 00:04:14.041 asserts 497 497 497 0 n/a 00:04:14.041 00:04:14.041 Elapsed time = 8.330 seconds 00:04:14.041 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.041 EAL: request: mp_malloc_sync 00:04:14.041 EAL: No shared files mode enabled, IPC is disabled 00:04:14.041 EAL: Heap on socket 0 was shrunk by 2MB 00:04:14.041 EAL: No shared files mode enabled, IPC is disabled 00:04:14.041 EAL: No shared files mode enabled, IPC is disabled 00:04:14.041 EAL: No shared files mode enabled, IPC is disabled 00:04:14.041 00:04:14.041 real 0m8.593s 00:04:14.041 user 0m7.468s 00:04:14.041 sys 0m1.064s 00:04:14.041 14:06:23 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.041 14:06:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:14.041 ************************************ 00:04:14.041 END TEST env_vtophys 00:04:14.041 ************************************ 00:04:14.041 14:06:23 env -- common/autotest_common.sh@1142 -- # return 0 00:04:14.041 14:06:23 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:14.041 14:06:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.041 14:06:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.042 14:06:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.042 ************************************ 00:04:14.042 START TEST env_pci 00:04:14.042 ************************************ 00:04:14.042 14:06:23 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:14.042 00:04:14.042 00:04:14.042 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.042 http://cunit.sourceforge.net/ 00:04:14.042 00:04:14.042 00:04:14.042 Suite: pci 00:04:14.042 Test: pci_hook ...[2024-07-10 14:06:23.133937] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1236976 has claimed it 00:04:14.042 EAL: Cannot find device (10000:00:01.0) 00:04:14.042 EAL: Failed to attach device on primary process 00:04:14.042 passed 00:04:14.042 00:04:14.042 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.042 suites 1 1 n/a 0 0 00:04:14.042 tests 1 1 1 0 0 00:04:14.042 asserts 25 25 25 0 n/a 00:04:14.042 00:04:14.042 Elapsed time = 0.041 seconds 00:04:14.042 00:04:14.042 real 0m0.091s 00:04:14.042 user 0m0.035s 00:04:14.042 sys 0m0.056s 00:04:14.042 14:06:23 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.042 14:06:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:14.042 ************************************ 00:04:14.042 END TEST env_pci 00:04:14.042 ************************************ 
00:04:14.042 14:06:23 env -- common/autotest_common.sh@1142 -- # return 0 00:04:14.042 14:06:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:14.042 14:06:23 env -- env/env.sh@15 -- # uname 00:04:14.042 14:06:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:14.042 14:06:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:14.042 14:06:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:14.042 14:06:23 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:14.042 14:06:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.042 14:06:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.042 ************************************ 00:04:14.042 START TEST env_dpdk_post_init 00:04:14.042 ************************************ 00:04:14.042 14:06:23 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:14.042 EAL: Detected CPU lcores: 48 00:04:14.042 EAL: Detected NUMA nodes: 2 00:04:14.042 EAL: Detected shared linkage of DPDK 00:04:14.042 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.042 EAL: Selected IOVA mode 'VA' 00:04:14.042 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.042 EAL: VFIO support initialized 00:04:14.042 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.042 EAL: Using IOMMU type 1 (Type 1) 00:04:14.042 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:14.042 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:14.042 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:14.300 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:14.300 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:14.300 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:14.300 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:14.300 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:14.300 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:14.300 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:14.300 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:14.300 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:14.300 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:14.300 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:14.300 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:14.300 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:15.238 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:18.625 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:18.625 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:18.625 Starting DPDK initialization... 00:04:18.625 Starting SPDK post initialization... 00:04:18.625 SPDK NVMe probe 00:04:18.625 Attaching to 0000:88:00.0 00:04:18.625 Attached to 0000:88:00.0 00:04:18.625 Cleaning up... 
00:04:18.625 00:04:18.625 real 0m4.552s 00:04:18.625 user 0m3.350s 00:04:18.625 sys 0m0.256s 00:04:18.625 14:06:27 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.625 14:06:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:18.625 ************************************ 00:04:18.625 END TEST env_dpdk_post_init 00:04:18.625 ************************************ 00:04:18.625 14:06:27 env -- common/autotest_common.sh@1142 -- # return 0 00:04:18.625 14:06:27 env -- env/env.sh@26 -- # uname 00:04:18.625 14:06:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:18.625 14:06:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:18.625 14:06:27 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.625 14:06:27 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.625 14:06:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.625 ************************************ 00:04:18.625 START TEST env_mem_callbacks 00:04:18.625 ************************************ 00:04:18.625 14:06:27 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:18.625 EAL: Detected CPU lcores: 48 00:04:18.625 EAL: Detected NUMA nodes: 2 00:04:18.625 EAL: Detected shared linkage of DPDK 00:04:18.625 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:18.625 EAL: Selected IOVA mode 'VA' 00:04:18.625 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.625 EAL: VFIO support initialized 00:04:18.625 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:18.625 00:04:18.625 00:04:18.625 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.625 http://cunit.sourceforge.net/ 00:04:18.625 00:04:18.625 00:04:18.625 Suite: memory 00:04:18.625 Test: test ... 
00:04:18.625 register 0x200000200000 2097152 00:04:18.625 malloc 3145728 00:04:18.625 register 0x200000400000 4194304 00:04:18.625 buf 0x2000004fffc0 len 3145728 PASSED 00:04:18.625 malloc 64 00:04:18.625 buf 0x2000004ffec0 len 64 PASSED 00:04:18.625 malloc 4194304 00:04:18.625 register 0x200000800000 6291456 00:04:18.625 buf 0x2000009fffc0 len 4194304 PASSED 00:04:18.625 free 0x2000004fffc0 3145728 00:04:18.625 free 0x2000004ffec0 64 00:04:18.625 unregister 0x200000400000 4194304 PASSED 00:04:18.625 free 0x2000009fffc0 4194304 00:04:18.625 unregister 0x200000800000 6291456 PASSED 00:04:18.625 malloc 8388608 00:04:18.625 register 0x200000400000 10485760 00:04:18.625 buf 0x2000005fffc0 len 8388608 PASSED 00:04:18.625 free 0x2000005fffc0 8388608 00:04:18.625 unregister 0x200000400000 10485760 PASSED 00:04:18.625 passed 00:04:18.625 00:04:18.625 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.625 suites 1 1 n/a 0 0 00:04:18.625 tests 1 1 1 0 0 00:04:18.625 asserts 15 15 15 0 n/a 00:04:18.625 00:04:18.625 Elapsed time = 0.060 seconds 00:04:18.625 00:04:18.625 real 0m0.179s 00:04:18.625 user 0m0.100s 00:04:18.626 sys 0m0.078s 00:04:18.626 14:06:28 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.626 14:06:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:18.626 ************************************ 00:04:18.626 END TEST env_mem_callbacks 00:04:18.626 ************************************ 00:04:18.626 14:06:28 env -- common/autotest_common.sh@1142 -- # return 0 00:04:18.626 00:04:18.626 real 0m13.970s 00:04:18.626 user 0m11.321s 00:04:18.626 sys 0m1.660s 00:04:18.626 14:06:28 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.626 14:06:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.626 ************************************ 00:04:18.626 END TEST env 00:04:18.626 ************************************ 00:04:18.626 14:06:28 -- common/autotest_common.sh@1142 -- # return 0 00:04:18.626 14:06:28 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:18.626 14:06:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.626 14:06:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.626 14:06:28 -- common/autotest_common.sh@10 -- # set +x 00:04:18.626 ************************************ 00:04:18.626 START TEST rpc 00:04:18.626 ************************************ 00:04:18.626 14:06:28 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:18.889 * Looking for test storage... 00:04:18.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:18.889 14:06:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1237765 00:04:18.889 14:06:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:18.889 14:06:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.889 14:06:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1237765 00:04:18.889 14:06:28 rpc -- common/autotest_common.sh@829 -- # '[' -z 1237765 ']' 00:04:18.889 14:06:28 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.889 14:06:28 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:18.889 14:06:28 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:18.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.889 14:06:28 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:18.889 14:06:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.889 [2024-07-10 14:06:28.237009] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:04:18.889 [2024-07-10 14:06:28.237166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237765 ] 00:04:18.889 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.889 [2024-07-10 14:06:28.365472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.147 [2024-07-10 14:06:28.623287] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:19.147 [2024-07-10 14:06:28.623374] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1237765' to capture a snapshot of events at runtime. 00:04:19.147 [2024-07-10 14:06:28.623399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:19.147 [2024-07-10 14:06:28.623443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:19.147 [2024-07-10 14:06:28.623468] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1237765 for offline analysis/debug. 00:04:19.147 [2024-07-10 14:06:28.623531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.081 14:06:29 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:20.081 14:06:29 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:20.081 14:06:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:20.081 14:06:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:20.081 14:06:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:20.081 14:06:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:20.081 14:06:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.081 14:06:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.081 14:06:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.081 ************************************ 00:04:20.081 START TEST rpc_integrity 00:04:20.081 ************************************ 00:04:20.081 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:20.081 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:20.081 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.081 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.082 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.082 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:20.082 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:20.340 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:20.340 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:20.340 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.340 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.340 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.340 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:20.340 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:20.340 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.340 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.340 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.340 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:20.340 { 00:04:20.340 "name": "Malloc0", 00:04:20.340 "aliases": [ 00:04:20.340 "767811cd-1a9a-41f0-a820-39647dd74ccf" 00:04:20.340 ], 00:04:20.340 "product_name": "Malloc disk", 00:04:20.340 "block_size": 512, 00:04:20.340 "num_blocks": 16384, 00:04:20.340 "uuid": "767811cd-1a9a-41f0-a820-39647dd74ccf", 00:04:20.340 "assigned_rate_limits": { 00:04:20.340 "rw_ios_per_sec": 0, 00:04:20.340 "rw_mbytes_per_sec": 0, 00:04:20.340 "r_mbytes_per_sec": 0, 00:04:20.340 "w_mbytes_per_sec": 0 00:04:20.340 }, 00:04:20.340 "claimed": false, 00:04:20.340 "zoned": false, 00:04:20.340 "supported_io_types": { 00:04:20.340 "read": true, 00:04:20.340 "write": true, 00:04:20.340 "unmap": true, 00:04:20.340 "flush": true, 00:04:20.340 "reset": true, 00:04:20.340 "nvme_admin": false, 00:04:20.340 "nvme_io": false, 00:04:20.340 "nvme_io_md": false, 00:04:20.340 "write_zeroes": true, 00:04:20.340 "zcopy": true, 00:04:20.340 "get_zone_info": false, 00:04:20.340 "zone_management": false, 00:04:20.340 "zone_append": false, 00:04:20.340 "compare": false, 00:04:20.340 "compare_and_write": false, 00:04:20.340 "abort": true, 00:04:20.340 "seek_hole": false, 00:04:20.340 "seek_data": false, 00:04:20.340 "copy": true, 00:04:20.340 "nvme_iov_md": false 00:04:20.340 }, 00:04:20.340 "memory_domains": [ 00:04:20.340 { 00:04:20.340 "dma_device_id": "system", 00:04:20.340 "dma_device_type": 1 00:04:20.340 }, 00:04:20.340 { 00:04:20.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.340 "dma_device_type": 2 00:04:20.340 } 00:04:20.340 ], 00:04:20.340 "driver_specific": {} 00:04:20.340 } 00:04:20.340 ]' 00:04:20.340 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:20.340 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:20.340 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:20.340 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.340 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.340 [2024-07-10 14:06:29.663453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:20.340 [2024-07-10 14:06:29.663548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:20.340 [2024-07-10 14:06:29.663591] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:20.340 [2024-07-10 14:06:29.663617] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:04:20.340 [2024-07-10 14:06:29.666338] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:20.340 [2024-07-10 14:06:29.666388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:20.340 Passthru0 00:04:20.340 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.340 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:20.340 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.340 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.340 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.340 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:20.340 { 00:04:20.340 "name": "Malloc0", 00:04:20.340 "aliases": [ 00:04:20.340 "767811cd-1a9a-41f0-a820-39647dd74ccf" 00:04:20.340 ], 00:04:20.340 "product_name": "Malloc disk", 00:04:20.340 "block_size": 512, 00:04:20.340 "num_blocks": 16384, 00:04:20.340 "uuid": "767811cd-1a9a-41f0-a820-39647dd74ccf", 00:04:20.340 "assigned_rate_limits": { 00:04:20.340 "rw_ios_per_sec": 0, 00:04:20.340 "rw_mbytes_per_sec": 0, 00:04:20.340 "r_mbytes_per_sec": 0, 00:04:20.340 "w_mbytes_per_sec": 0 00:04:20.340 }, 00:04:20.340 "claimed": true, 00:04:20.340 "claim_type": "exclusive_write", 00:04:20.340 "zoned": false, 00:04:20.340 "supported_io_types": { 00:04:20.340 "read": true, 00:04:20.340 "write": true, 00:04:20.340 "unmap": true, 00:04:20.340 "flush": true, 00:04:20.340 "reset": true, 00:04:20.340 "nvme_admin": false, 00:04:20.340 "nvme_io": false, 00:04:20.340 "nvme_io_md": false, 00:04:20.340 "write_zeroes": true, 00:04:20.340 "zcopy": true, 00:04:20.340 "get_zone_info": false, 00:04:20.340 "zone_management": false, 00:04:20.340 "zone_append": false, 00:04:20.340 "compare": false, 00:04:20.340 "compare_and_write": false, 00:04:20.340 "abort": true, 00:04:20.340 "seek_hole": false, 00:04:20.340 "seek_data": false, 00:04:20.340 "copy": true, 00:04:20.340 "nvme_iov_md": false 00:04:20.340 }, 00:04:20.340 "memory_domains": [ 00:04:20.340 { 00:04:20.340 "dma_device_id": "system", 00:04:20.340 "dma_device_type": 1 00:04:20.340 }, 00:04:20.340 { 00:04:20.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.340 "dma_device_type": 2 00:04:20.340 } 00:04:20.340 ], 00:04:20.340 "driver_specific": {} 00:04:20.340 }, 00:04:20.340 { 00:04:20.340 "name": "Passthru0", 00:04:20.340 "aliases": [ 00:04:20.340 "e1d1266e-64ee-56e6-82d9-02dfa89dc56e" 00:04:20.340 ], 00:04:20.340 "product_name": "passthru", 00:04:20.340 "block_size": 512, 00:04:20.340 "num_blocks": 16384, 00:04:20.340 "uuid": "e1d1266e-64ee-56e6-82d9-02dfa89dc56e", 00:04:20.340 "assigned_rate_limits": { 00:04:20.340 "rw_ios_per_sec": 0, 00:04:20.340 "rw_mbytes_per_sec": 0, 00:04:20.340 "r_mbytes_per_sec": 0, 00:04:20.340 "w_mbytes_per_sec": 0 00:04:20.340 }, 00:04:20.340 "claimed": false, 00:04:20.340 "zoned": false, 00:04:20.340 "supported_io_types": { 00:04:20.340 "read": true, 00:04:20.340 "write": true, 00:04:20.340 "unmap": true, 00:04:20.340 "flush": true, 00:04:20.340 "reset": true, 00:04:20.340 "nvme_admin": false, 00:04:20.340 "nvme_io": false, 00:04:20.340 "nvme_io_md": false, 00:04:20.340 "write_zeroes": true, 00:04:20.340 "zcopy": true, 00:04:20.340 "get_zone_info": false, 00:04:20.340 "zone_management": false, 00:04:20.340 "zone_append": false, 00:04:20.340 "compare": false, 00:04:20.340 "compare_and_write": false, 00:04:20.340 "abort": true, 00:04:20.340 
"seek_hole": false, 00:04:20.340 "seek_data": false, 00:04:20.340 "copy": true, 00:04:20.340 "nvme_iov_md": false 00:04:20.340 }, 00:04:20.340 "memory_domains": [ 00:04:20.340 { 00:04:20.340 "dma_device_id": "system", 00:04:20.340 "dma_device_type": 1 00:04:20.340 }, 00:04:20.340 { 00:04:20.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.341 "dma_device_type": 2 00:04:20.341 } 00:04:20.341 ], 00:04:20.341 "driver_specific": { 00:04:20.341 "passthru": { 00:04:20.341 "name": "Passthru0", 00:04:20.341 "base_bdev_name": "Malloc0" 00:04:20.341 } 00:04:20.341 } 00:04:20.341 } 00:04:20.341 ]' 00:04:20.341 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:20.341 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:20.341 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:20.341 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.341 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.341 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.341 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:20.341 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.341 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.341 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.341 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:20.341 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.341 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.341 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.341 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:20.341 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:20.341 14:06:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:20.341 00:04:20.341 real 0m0.263s 00:04:20.341 user 0m0.155s 00:04:20.341 sys 0m0.022s 00:04:20.341 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.341 14:06:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.341 ************************************ 00:04:20.341 END TEST rpc_integrity 00:04:20.341 ************************************ 00:04:20.599 14:06:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:20.599 14:06:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:20.599 14:06:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.599 14:06:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.599 14:06:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.599 ************************************ 00:04:20.599 START TEST rpc_plugins 00:04:20.599 ************************************ 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:20.599 14:06:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.599 14:06:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:20.599 14:06:29 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.599 14:06:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:20.599 { 00:04:20.599 "name": "Malloc1", 00:04:20.599 "aliases": [ 00:04:20.599 "e0fd1458-5ff9-4c04-a04b-906227fa9522" 00:04:20.599 ], 00:04:20.599 "product_name": "Malloc disk", 00:04:20.599 "block_size": 4096, 00:04:20.599 "num_blocks": 256, 00:04:20.599 "uuid": "e0fd1458-5ff9-4c04-a04b-906227fa9522", 00:04:20.599 "assigned_rate_limits": { 00:04:20.599 "rw_ios_per_sec": 0, 00:04:20.599 "rw_mbytes_per_sec": 0, 00:04:20.599 "r_mbytes_per_sec": 0, 00:04:20.599 "w_mbytes_per_sec": 0 00:04:20.599 }, 00:04:20.599 "claimed": false, 00:04:20.599 "zoned": false, 00:04:20.599 "supported_io_types": { 00:04:20.599 "read": true, 00:04:20.599 "write": true, 00:04:20.599 "unmap": true, 00:04:20.599 "flush": true, 00:04:20.599 "reset": true, 00:04:20.599 "nvme_admin": false, 00:04:20.599 "nvme_io": false, 00:04:20.599 "nvme_io_md": false, 00:04:20.599 "write_zeroes": true, 00:04:20.599 "zcopy": true, 00:04:20.599 "get_zone_info": false, 00:04:20.599 "zone_management": false, 00:04:20.599 "zone_append": false, 00:04:20.599 "compare": false, 00:04:20.599 "compare_and_write": false, 00:04:20.599 "abort": true, 00:04:20.599 "seek_hole": false, 00:04:20.599 "seek_data": false, 00:04:20.599 "copy": true, 00:04:20.599 "nvme_iov_md": false 00:04:20.599 }, 00:04:20.599 "memory_domains": [ 00:04:20.599 { 00:04:20.599 "dma_device_id": "system", 00:04:20.599 "dma_device_type": 1 00:04:20.599 }, 00:04:20.599 { 00:04:20.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.599 "dma_device_type": 2 00:04:20.599 } 00:04:20.599 ], 00:04:20.599 "driver_specific": {} 00:04:20.599 } 00:04:20.599 ]' 00:04:20.599 14:06:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:20.599 14:06:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:20.599 14:06:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.599 14:06:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.599 14:06:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:20.599 14:06:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:20.599 14:06:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:20.599 00:04:20.599 real 0m0.117s 00:04:20.599 user 0m0.074s 00:04:20.599 sys 0m0.011s 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.599 14:06:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:20.599 ************************************ 00:04:20.599 END TEST rpc_plugins 00:04:20.599 ************************************ 00:04:20.599 14:06:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:20.599 14:06:29 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:20.599 14:06:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.599 14:06:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.599 14:06:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.599 ************************************ 00:04:20.599 START TEST rpc_trace_cmd_test 00:04:20.599 ************************************ 00:04:20.599 14:06:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:20.599 14:06:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:20.599 14:06:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:20.599 14:06:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.599 14:06:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:20.599 14:06:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.599 14:06:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:20.599 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1237765", 00:04:20.599 "tpoint_group_mask": "0x8", 00:04:20.599 "iscsi_conn": { 00:04:20.599 "mask": "0x2", 00:04:20.599 "tpoint_mask": "0x0" 00:04:20.599 }, 00:04:20.599 "scsi": { 00:04:20.599 "mask": "0x4", 00:04:20.599 "tpoint_mask": "0x0" 00:04:20.599 }, 00:04:20.599 "bdev": { 00:04:20.599 "mask": "0x8", 00:04:20.599 "tpoint_mask": "0xffffffffffffffff" 00:04:20.599 }, 00:04:20.599 "nvmf_rdma": { 00:04:20.599 "mask": "0x10", 00:04:20.599 "tpoint_mask": "0x0" 00:04:20.599 }, 00:04:20.599 "nvmf_tcp": { 00:04:20.599 "mask": "0x20", 00:04:20.599 "tpoint_mask": "0x0" 00:04:20.599 }, 00:04:20.599 "ftl": { 00:04:20.599 "mask": "0x40", 00:04:20.599 "tpoint_mask": "0x0" 00:04:20.600 }, 00:04:20.600 "blobfs": { 00:04:20.600 "mask": "0x80", 00:04:20.600 "tpoint_mask": "0x0" 00:04:20.600 }, 00:04:20.600 "dsa": { 00:04:20.600 "mask": "0x200", 00:04:20.600 "tpoint_mask": "0x0" 00:04:20.600 }, 00:04:20.600 "thread": { 00:04:20.600 "mask": "0x400", 00:04:20.600 "tpoint_mask": "0x0" 00:04:20.600 }, 00:04:20.600 "nvme_pcie": { 00:04:20.600 "mask": "0x800", 00:04:20.600 "tpoint_mask": "0x0" 00:04:20.600 }, 00:04:20.600 "iaa": { 00:04:20.600 "mask": "0x1000", 00:04:20.600 "tpoint_mask": "0x0" 00:04:20.600 }, 00:04:20.600 "nvme_tcp": { 00:04:20.600 "mask": "0x2000", 00:04:20.600 "tpoint_mask": "0x0" 00:04:20.600 }, 00:04:20.600 "bdev_nvme": { 00:04:20.600 "mask": "0x4000", 00:04:20.600 "tpoint_mask": "0x0" 00:04:20.600 }, 00:04:20.600 "sock": { 00:04:20.600 "mask": "0x8000", 00:04:20.600 "tpoint_mask": "0x0" 00:04:20.600 } 00:04:20.600 }' 00:04:20.600 14:06:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:20.600 14:06:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:20.600 14:06:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:20.858 14:06:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:20.858 14:06:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:20.858 14:06:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:20.858 14:06:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:20.858 14:06:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:20.858 14:06:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:20.858 14:06:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:04:20.858 00:04:20.858 real 0m0.192s 00:04:20.858 user 0m0.167s 00:04:20.858 sys 0m0.018s 00:04:20.858 14:06:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.858 14:06:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:20.858 ************************************ 00:04:20.858 END TEST rpc_trace_cmd_test 00:04:20.858 ************************************ 00:04:20.858 14:06:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:20.858 14:06:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:20.858 14:06:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:20.858 14:06:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:20.858 14:06:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.858 14:06:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.858 14:06:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.858 ************************************ 00:04:20.858 START TEST rpc_daemon_integrity 00:04:20.858 ************************************ 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.858 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.117 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.117 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:21.117 { 00:04:21.117 "name": "Malloc2", 00:04:21.117 "aliases": [ 00:04:21.117 "c33bfa02-25bb-448f-9354-9e66ed4075d5" 00:04:21.117 ], 00:04:21.117 "product_name": "Malloc disk", 00:04:21.117 "block_size": 512, 00:04:21.117 "num_blocks": 16384, 00:04:21.117 "uuid": "c33bfa02-25bb-448f-9354-9e66ed4075d5", 00:04:21.117 "assigned_rate_limits": { 00:04:21.117 "rw_ios_per_sec": 0, 00:04:21.117 "rw_mbytes_per_sec": 0, 00:04:21.117 "r_mbytes_per_sec": 0, 00:04:21.117 "w_mbytes_per_sec": 0 00:04:21.117 }, 00:04:21.117 "claimed": false, 00:04:21.117 "zoned": false, 00:04:21.117 "supported_io_types": { 00:04:21.117 "read": true, 00:04:21.117 "write": true, 00:04:21.117 "unmap": true, 00:04:21.117 "flush": true, 00:04:21.117 "reset": true, 00:04:21.117 "nvme_admin": false, 
00:04:21.117 "nvme_io": false, 00:04:21.117 "nvme_io_md": false, 00:04:21.117 "write_zeroes": true, 00:04:21.117 "zcopy": true, 00:04:21.117 "get_zone_info": false, 00:04:21.117 "zone_management": false, 00:04:21.117 "zone_append": false, 00:04:21.117 "compare": false, 00:04:21.117 "compare_and_write": false, 00:04:21.117 "abort": true, 00:04:21.117 "seek_hole": false, 00:04:21.117 "seek_data": false, 00:04:21.117 "copy": true, 00:04:21.117 "nvme_iov_md": false 00:04:21.117 }, 00:04:21.117 "memory_domains": [ 00:04:21.117 { 00:04:21.117 "dma_device_id": "system", 00:04:21.117 "dma_device_type": 1 00:04:21.117 }, 00:04:21.117 { 00:04:21.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.117 "dma_device_type": 2 00:04:21.117 } 00:04:21.117 ], 00:04:21.117 "driver_specific": {} 00:04:21.117 } 00:04:21.117 ]' 00:04:21.117 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:21.117 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:21.117 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:21.117 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.117 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.118 [2024-07-10 14:06:30.381216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:21.118 [2024-07-10 14:06:30.381294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:21.118 [2024-07-10 14:06:30.381336] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:21.118 [2024-07-10 14:06:30.381365] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:21.118 [2024-07-10 14:06:30.384097] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:21.118 [2024-07-10 14:06:30.384142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:21.118 Passthru0 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.118 { 00:04:21.118 "name": "Malloc2", 00:04:21.118 "aliases": [ 00:04:21.118 "c33bfa02-25bb-448f-9354-9e66ed4075d5" 00:04:21.118 ], 00:04:21.118 "product_name": "Malloc disk", 00:04:21.118 "block_size": 512, 00:04:21.118 "num_blocks": 16384, 00:04:21.118 "uuid": "c33bfa02-25bb-448f-9354-9e66ed4075d5", 00:04:21.118 "assigned_rate_limits": { 00:04:21.118 "rw_ios_per_sec": 0, 00:04:21.118 "rw_mbytes_per_sec": 0, 00:04:21.118 "r_mbytes_per_sec": 0, 00:04:21.118 "w_mbytes_per_sec": 0 00:04:21.118 }, 00:04:21.118 "claimed": true, 00:04:21.118 "claim_type": "exclusive_write", 00:04:21.118 "zoned": false, 00:04:21.118 "supported_io_types": { 00:04:21.118 "read": true, 00:04:21.118 "write": true, 00:04:21.118 "unmap": true, 00:04:21.118 "flush": true, 00:04:21.118 "reset": true, 00:04:21.118 "nvme_admin": false, 00:04:21.118 "nvme_io": false, 00:04:21.118 "nvme_io_md": false, 00:04:21.118 "write_zeroes": true, 00:04:21.118 "zcopy": 
true, 00:04:21.118 "get_zone_info": false, 00:04:21.118 "zone_management": false, 00:04:21.118 "zone_append": false, 00:04:21.118 "compare": false, 00:04:21.118 "compare_and_write": false, 00:04:21.118 "abort": true, 00:04:21.118 "seek_hole": false, 00:04:21.118 "seek_data": false, 00:04:21.118 "copy": true, 00:04:21.118 "nvme_iov_md": false 00:04:21.118 }, 00:04:21.118 "memory_domains": [ 00:04:21.118 { 00:04:21.118 "dma_device_id": "system", 00:04:21.118 "dma_device_type": 1 00:04:21.118 }, 00:04:21.118 { 00:04:21.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.118 "dma_device_type": 2 00:04:21.118 } 00:04:21.118 ], 00:04:21.118 "driver_specific": {} 00:04:21.118 }, 00:04:21.118 { 00:04:21.118 "name": "Passthru0", 00:04:21.118 "aliases": [ 00:04:21.118 "668fdf48-8f7f-5e7d-ada0-4fb5293d004f" 00:04:21.118 ], 00:04:21.118 "product_name": "passthru", 00:04:21.118 "block_size": 512, 00:04:21.118 "num_blocks": 16384, 00:04:21.118 "uuid": "668fdf48-8f7f-5e7d-ada0-4fb5293d004f", 00:04:21.118 "assigned_rate_limits": { 00:04:21.118 "rw_ios_per_sec": 0, 00:04:21.118 "rw_mbytes_per_sec": 0, 00:04:21.118 "r_mbytes_per_sec": 0, 00:04:21.118 "w_mbytes_per_sec": 0 00:04:21.118 }, 00:04:21.118 "claimed": false, 00:04:21.118 "zoned": false, 00:04:21.118 "supported_io_types": { 00:04:21.118 "read": true, 00:04:21.118 "write": true, 00:04:21.118 "unmap": true, 00:04:21.118 "flush": true, 00:04:21.118 "reset": true, 00:04:21.118 "nvme_admin": false, 00:04:21.118 "nvme_io": false, 00:04:21.118 "nvme_io_md": false, 00:04:21.118 "write_zeroes": true, 00:04:21.118 "zcopy": true, 00:04:21.118 "get_zone_info": false, 00:04:21.118 "zone_management": false, 00:04:21.118 "zone_append": false, 00:04:21.118 "compare": false, 00:04:21.118 "compare_and_write": false, 00:04:21.118 "abort": true, 00:04:21.118 "seek_hole": false, 00:04:21.118 "seek_data": false, 00:04:21.118 "copy": true, 00:04:21.118 "nvme_iov_md": false 00:04:21.118 }, 00:04:21.118 "memory_domains": [ 00:04:21.118 { 00:04:21.118 "dma_device_id": "system", 00:04:21.118 "dma_device_type": 1 00:04:21.118 }, 00:04:21.118 { 00:04:21.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.118 "dma_device_type": 2 00:04:21.118 } 00:04:21.118 ], 00:04:21.118 "driver_specific": { 00:04:21.118 "passthru": { 00:04:21.118 "name": "Passthru0", 00:04:21.118 "base_bdev_name": "Malloc2" 00:04:21.118 } 00:04:21.118 } 00:04:21.118 } 00:04:21.118 ]' 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:21.118 00:04:21.118 real 0m0.261s 00:04:21.118 user 0m0.147s 00:04:21.118 sys 0m0.026s 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.118 14:06:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.118 ************************************ 00:04:21.118 END TEST rpc_daemon_integrity 00:04:21.118 ************************************ 00:04:21.118 14:06:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:21.118 14:06:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:21.118 14:06:30 rpc -- rpc/rpc.sh@84 -- # killprocess 1237765 00:04:21.118 14:06:30 rpc -- common/autotest_common.sh@948 -- # '[' -z 1237765 ']' 00:04:21.118 14:06:30 rpc -- common/autotest_common.sh@952 -- # kill -0 1237765 00:04:21.118 14:06:30 rpc -- common/autotest_common.sh@953 -- # uname 00:04:21.118 14:06:30 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:21.118 14:06:30 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1237765 00:04:21.118 14:06:30 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:21.119 14:06:30 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:21.119 14:06:30 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1237765' 00:04:21.119 killing process with pid 1237765 00:04:21.119 14:06:30 rpc -- common/autotest_common.sh@967 -- # kill 1237765 00:04:21.119 14:06:30 rpc -- common/autotest_common.sh@972 -- # wait 1237765 00:04:23.652 00:04:23.652 real 0m4.990s 00:04:23.652 user 0m5.532s 00:04:23.652 sys 0m0.815s 00:04:23.652 14:06:33 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.652 14:06:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.652 ************************************ 00:04:23.652 END TEST rpc 00:04:23.652 ************************************ 00:04:23.652 14:06:33 -- common/autotest_common.sh@1142 -- # return 0 00:04:23.652 14:06:33 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:23.652 14:06:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.652 14:06:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.652 14:06:33 -- common/autotest_common.sh@10 -- # set +x 00:04:23.915 ************************************ 00:04:23.915 START TEST skip_rpc 00:04:23.915 ************************************ 00:04:23.915 14:06:33 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:23.915 * Looking for test storage... 
00:04:23.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:23.915 14:06:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:23.915 14:06:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:23.915 14:06:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:23.915 14:06:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.915 14:06:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.915 14:06:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.915 ************************************ 00:04:23.915 START TEST skip_rpc 00:04:23.915 ************************************ 00:04:23.915 14:06:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:23.915 14:06:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1238478 00:04:23.915 14:06:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:23.915 14:06:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.915 14:06:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:23.915 [2024-07-10 14:06:33.317197] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:04:23.915 [2024-07-10 14:06:33.317377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238478 ] 00:04:23.915 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.173 [2024-07-10 14:06:33.447063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.431 [2024-07-10 14:06:33.704331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1238478 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1238478 ']' 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1238478 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1238478 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1238478' 00:04:29.695 killing process with pid 1238478 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1238478 00:04:29.695 14:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1238478 00:04:31.594 00:04:31.594 real 0m7.533s 00:04:31.594 user 0m7.044s 00:04:31.594 sys 0m0.473s 00:04:31.594 14:06:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.594 14:06:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.594 ************************************ 00:04:31.594 END TEST skip_rpc 00:04:31.594 ************************************ 00:04:31.594 14:06:40 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:31.594 14:06:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:31.594 14:06:40 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.594 14:06:40 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.594 14:06:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.594 ************************************ 00:04:31.594 START TEST skip_rpc_with_json 00:04:31.594 ************************************ 00:04:31.594 14:06:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:31.594 14:06:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:31.594 14:06:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1239427 00:04:31.594 14:06:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.594 14:06:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.594 14:06:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1239427 00:04:31.594 14:06:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1239427 ']' 00:04:31.594 14:06:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.594 14:06:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.594 14:06:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
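The save_config dump that follows is what skip_rpc_with_json feeds back into a fresh target. A minimal sketch of that round trip, assuming the build-tree spdk_tgt, the rpc_cmd wrapper, and the CONFIG_PATH/LOG_PATH files this job defines (illustrative only, not captured output):

    # build a config with a TCP transport, save it, then boot a new target from it
    rpc_cmd nvmf_create_transport -t tcp
    rpc_cmd save_config > "$CONFIG_PATH"              # JSON shown below
    spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG_PATH" > "$LOG_PATH" 2>&1 &
    sleep 5; kill $!; wait
    grep -q 'TCP Transport Init' "$LOG_PATH"          # the transport must have come back up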
00:04:31.594 14:06:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.594 14:06:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.594 [2024-07-10 14:06:40.890968] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:04:31.595 [2024-07-10 14:06:40.891124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239427 ] 00:04:31.595 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.595 [2024-07-10 14:06:41.013252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.852 [2024-07-10 14:06:41.263444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.788 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.788 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:32.788 14:06:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:32.788 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.788 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.788 [2024-07-10 14:06:42.135451] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:32.788 request: 00:04:32.788 { 00:04:32.788 "trtype": "tcp", 00:04:32.788 "method": "nvmf_get_transports", 00:04:32.788 "req_id": 1 00:04:32.788 } 00:04:32.788 Got JSON-RPC error response 00:04:32.788 response: 00:04:32.788 { 00:04:32.788 "code": -19, 00:04:32.788 "message": "No such device" 00:04:32.788 } 00:04:32.788 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:32.788 14:06:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:32.788 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.788 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.788 [2024-07-10 14:06:42.143595] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:32.788 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.788 14:06:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:32.788 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.788 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.047 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.047 14:06:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:33.047 { 00:04:33.047 "subsystems": [ 00:04:33.047 { 00:04:33.047 "subsystem": "keyring", 00:04:33.047 "config": [] 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "subsystem": "iobuf", 00:04:33.047 "config": [ 00:04:33.047 { 00:04:33.047 "method": "iobuf_set_options", 00:04:33.047 "params": { 00:04:33.047 "small_pool_count": 8192, 00:04:33.047 "large_pool_count": 1024, 00:04:33.047 "small_bufsize": 8192, 00:04:33.047 "large_bufsize": 135168 00:04:33.047 } 00:04:33.047 } 00:04:33.047 ] 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "subsystem": 
"sock", 00:04:33.047 "config": [ 00:04:33.047 { 00:04:33.047 "method": "sock_set_default_impl", 00:04:33.047 "params": { 00:04:33.047 "impl_name": "posix" 00:04:33.047 } 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "method": "sock_impl_set_options", 00:04:33.047 "params": { 00:04:33.047 "impl_name": "ssl", 00:04:33.047 "recv_buf_size": 4096, 00:04:33.047 "send_buf_size": 4096, 00:04:33.047 "enable_recv_pipe": true, 00:04:33.047 "enable_quickack": false, 00:04:33.047 "enable_placement_id": 0, 00:04:33.047 "enable_zerocopy_send_server": true, 00:04:33.047 "enable_zerocopy_send_client": false, 00:04:33.047 "zerocopy_threshold": 0, 00:04:33.047 "tls_version": 0, 00:04:33.047 "enable_ktls": false 00:04:33.047 } 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "method": "sock_impl_set_options", 00:04:33.047 "params": { 00:04:33.047 "impl_name": "posix", 00:04:33.047 "recv_buf_size": 2097152, 00:04:33.047 "send_buf_size": 2097152, 00:04:33.047 "enable_recv_pipe": true, 00:04:33.047 "enable_quickack": false, 00:04:33.047 "enable_placement_id": 0, 00:04:33.047 "enable_zerocopy_send_server": true, 00:04:33.047 "enable_zerocopy_send_client": false, 00:04:33.047 "zerocopy_threshold": 0, 00:04:33.047 "tls_version": 0, 00:04:33.047 "enable_ktls": false 00:04:33.047 } 00:04:33.047 } 00:04:33.047 ] 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "subsystem": "vmd", 00:04:33.047 "config": [] 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "subsystem": "accel", 00:04:33.047 "config": [ 00:04:33.047 { 00:04:33.047 "method": "accel_set_options", 00:04:33.047 "params": { 00:04:33.047 "small_cache_size": 128, 00:04:33.047 "large_cache_size": 16, 00:04:33.047 "task_count": 2048, 00:04:33.047 "sequence_count": 2048, 00:04:33.047 "buf_count": 2048 00:04:33.047 } 00:04:33.047 } 00:04:33.047 ] 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "subsystem": "bdev", 00:04:33.047 "config": [ 00:04:33.047 { 00:04:33.047 "method": "bdev_set_options", 00:04:33.047 "params": { 00:04:33.047 "bdev_io_pool_size": 65535, 00:04:33.047 "bdev_io_cache_size": 256, 00:04:33.047 "bdev_auto_examine": true, 00:04:33.047 "iobuf_small_cache_size": 128, 00:04:33.047 "iobuf_large_cache_size": 16 00:04:33.047 } 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "method": "bdev_raid_set_options", 00:04:33.047 "params": { 00:04:33.047 "process_window_size_kb": 1024 00:04:33.047 } 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "method": "bdev_iscsi_set_options", 00:04:33.047 "params": { 00:04:33.047 "timeout_sec": 30 00:04:33.047 } 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "method": "bdev_nvme_set_options", 00:04:33.047 "params": { 00:04:33.047 "action_on_timeout": "none", 00:04:33.047 "timeout_us": 0, 00:04:33.047 "timeout_admin_us": 0, 00:04:33.047 "keep_alive_timeout_ms": 10000, 00:04:33.047 "arbitration_burst": 0, 00:04:33.047 "low_priority_weight": 0, 00:04:33.047 "medium_priority_weight": 0, 00:04:33.047 "high_priority_weight": 0, 00:04:33.047 "nvme_adminq_poll_period_us": 10000, 00:04:33.047 "nvme_ioq_poll_period_us": 0, 00:04:33.047 "io_queue_requests": 0, 00:04:33.047 "delay_cmd_submit": true, 00:04:33.047 "transport_retry_count": 4, 00:04:33.047 "bdev_retry_count": 3, 00:04:33.047 "transport_ack_timeout": 0, 00:04:33.047 "ctrlr_loss_timeout_sec": 0, 00:04:33.047 "reconnect_delay_sec": 0, 00:04:33.047 "fast_io_fail_timeout_sec": 0, 00:04:33.047 "disable_auto_failback": false, 00:04:33.047 "generate_uuids": false, 00:04:33.047 "transport_tos": 0, 00:04:33.047 "nvme_error_stat": false, 00:04:33.047 "rdma_srq_size": 0, 00:04:33.047 "io_path_stat": false, 
00:04:33.047 "allow_accel_sequence": false, 00:04:33.047 "rdma_max_cq_size": 0, 00:04:33.047 "rdma_cm_event_timeout_ms": 0, 00:04:33.047 "dhchap_digests": [ 00:04:33.047 "sha256", 00:04:33.047 "sha384", 00:04:33.047 "sha512" 00:04:33.047 ], 00:04:33.047 "dhchap_dhgroups": [ 00:04:33.047 "null", 00:04:33.047 "ffdhe2048", 00:04:33.047 "ffdhe3072", 00:04:33.047 "ffdhe4096", 00:04:33.047 "ffdhe6144", 00:04:33.047 "ffdhe8192" 00:04:33.047 ] 00:04:33.047 } 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "method": "bdev_nvme_set_hotplug", 00:04:33.047 "params": { 00:04:33.047 "period_us": 100000, 00:04:33.047 "enable": false 00:04:33.047 } 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "method": "bdev_wait_for_examine" 00:04:33.047 } 00:04:33.047 ] 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "subsystem": "scsi", 00:04:33.047 "config": null 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "subsystem": "scheduler", 00:04:33.047 "config": [ 00:04:33.047 { 00:04:33.047 "method": "framework_set_scheduler", 00:04:33.047 "params": { 00:04:33.047 "name": "static" 00:04:33.047 } 00:04:33.047 } 00:04:33.047 ] 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "subsystem": "vhost_scsi", 00:04:33.047 "config": [] 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "subsystem": "vhost_blk", 00:04:33.047 "config": [] 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "subsystem": "ublk", 00:04:33.047 "config": [] 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "subsystem": "nbd", 00:04:33.047 "config": [] 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "subsystem": "nvmf", 00:04:33.047 "config": [ 00:04:33.047 { 00:04:33.047 "method": "nvmf_set_config", 00:04:33.047 "params": { 00:04:33.047 "discovery_filter": "match_any", 00:04:33.047 "admin_cmd_passthru": { 00:04:33.047 "identify_ctrlr": false 00:04:33.047 } 00:04:33.047 } 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "method": "nvmf_set_max_subsystems", 00:04:33.047 "params": { 00:04:33.047 "max_subsystems": 1024 00:04:33.047 } 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "method": "nvmf_set_crdt", 00:04:33.047 "params": { 00:04:33.047 "crdt1": 0, 00:04:33.047 "crdt2": 0, 00:04:33.047 "crdt3": 0 00:04:33.047 } 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "method": "nvmf_create_transport", 00:04:33.047 "params": { 00:04:33.047 "trtype": "TCP", 00:04:33.047 "max_queue_depth": 128, 00:04:33.047 "max_io_qpairs_per_ctrlr": 127, 00:04:33.047 "in_capsule_data_size": 4096, 00:04:33.047 "max_io_size": 131072, 00:04:33.047 "io_unit_size": 131072, 00:04:33.047 "max_aq_depth": 128, 00:04:33.047 "num_shared_buffers": 511, 00:04:33.047 "buf_cache_size": 4294967295, 00:04:33.047 "dif_insert_or_strip": false, 00:04:33.047 "zcopy": false, 00:04:33.047 "c2h_success": true, 00:04:33.047 "sock_priority": 0, 00:04:33.047 "abort_timeout_sec": 1, 00:04:33.047 "ack_timeout": 0, 00:04:33.047 "data_wr_pool_size": 0 00:04:33.047 } 00:04:33.047 } 00:04:33.047 ] 00:04:33.047 }, 00:04:33.047 { 00:04:33.047 "subsystem": "iscsi", 00:04:33.047 "config": [ 00:04:33.047 { 00:04:33.047 "method": "iscsi_set_options", 00:04:33.047 "params": { 00:04:33.047 "node_base": "iqn.2016-06.io.spdk", 00:04:33.047 "max_sessions": 128, 00:04:33.047 "max_connections_per_session": 2, 00:04:33.047 "max_queue_depth": 64, 00:04:33.047 "default_time2wait": 2, 00:04:33.047 "default_time2retain": 20, 00:04:33.047 "first_burst_length": 8192, 00:04:33.047 "immediate_data": true, 00:04:33.047 "allow_duplicated_isid": false, 00:04:33.047 "error_recovery_level": 0, 00:04:33.047 "nop_timeout": 60, 00:04:33.047 "nop_in_interval": 30, 00:04:33.047 "disable_chap": 
false, 00:04:33.047 "require_chap": false, 00:04:33.047 "mutual_chap": false, 00:04:33.047 "chap_group": 0, 00:04:33.047 "max_large_datain_per_connection": 64, 00:04:33.047 "max_r2t_per_connection": 4, 00:04:33.047 "pdu_pool_size": 36864, 00:04:33.047 "immediate_data_pool_size": 16384, 00:04:33.047 "data_out_pool_size": 2048 00:04:33.047 } 00:04:33.047 } 00:04:33.047 ] 00:04:33.047 } 00:04:33.047 ] 00:04:33.047 } 00:04:33.048 14:06:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:33.048 14:06:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1239427 00:04:33.048 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1239427 ']' 00:04:33.048 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1239427 00:04:33.048 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:33.048 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:33.048 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1239427 00:04:33.048 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:33.048 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:33.048 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1239427' 00:04:33.048 killing process with pid 1239427 00:04:33.048 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1239427 00:04:33.048 14:06:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1239427 00:04:35.578 14:06:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1239963 00:04:35.578 14:06:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:35.578 14:06:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:40.841 14:06:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1239963 00:04:40.841 14:06:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1239963 ']' 00:04:40.841 14:06:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1239963 00:04:40.841 14:06:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:40.841 14:06:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.841 14:06:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1239963 00:04:40.841 14:06:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.841 14:06:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.841 14:06:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1239963' 00:04:40.841 killing process with pid 1239963 00:04:40.841 14:06:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1239963 00:04:40.841 14:06:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1239963 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:43.365 00:04:43.365 real 0m11.548s 00:04:43.365 user 0m11.010s 00:04:43.365 sys 0m1.042s 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.365 ************************************ 00:04:43.365 END TEST skip_rpc_with_json 00:04:43.365 ************************************ 00:04:43.365 14:06:52 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:43.365 14:06:52 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:43.365 14:06:52 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.365 14:06:52 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.365 14:06:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.365 ************************************ 00:04:43.365 START TEST skip_rpc_with_delay 00:04:43.365 ************************************ 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:43.365 [2024-07-10 14:06:52.486675] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
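The errors around this point are the expected outcome of skip_rpc_with_delay: --wait-for-rpc only makes sense when the RPC server is started, so the launch has to be rejected. A minimal sketch of the check, assuming the build-tree spdk_tgt (illustrative only, not captured output):

    # this invocation is expected to fail; a zero exit code would be a test failure
    if spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi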
00:04:43.365 [2024-07-10 14:06:52.486858] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:43.365 00:04:43.365 real 0m0.142s 00:04:43.365 user 0m0.078s 00:04:43.365 sys 0m0.064s 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.365 14:06:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:43.365 ************************************ 00:04:43.365 END TEST skip_rpc_with_delay 00:04:43.365 ************************************ 00:04:43.365 14:06:52 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:43.365 14:06:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:43.365 14:06:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:43.365 14:06:52 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:43.365 14:06:52 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.365 14:06:52 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.365 14:06:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.365 ************************************ 00:04:43.365 START TEST exit_on_failed_rpc_init 00:04:43.365 ************************************ 00:04:43.365 14:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:43.365 14:06:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1240830 00:04:43.365 14:06:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.365 14:06:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1240830 00:04:43.365 14:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1240830 ']' 00:04:43.365 14:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.365 14:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.365 14:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.365 14:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.365 14:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.365 [2024-07-10 14:06:52.678622] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:04:43.365 [2024-07-10 14:06:52.678813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240830 ] 00:04:43.365 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.365 [2024-07-10 14:06:52.803790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.625 [2024-07-10 14:06:53.056264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:44.558 14:06:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:44.558 [2024-07-10 14:06:54.026694] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:04:44.559 [2024-07-10 14:06:54.026835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241086 ] 00:04:44.817 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.817 [2024-07-10 14:06:54.153413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.075 [2024-07-10 14:06:54.408048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.075 [2024-07-10 14:06:54.408211] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:45.075 [2024-07-10 14:06:54.408247] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:45.075 [2024-07-10 14:06:54.408273] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1240830 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1240830 ']' 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1240830 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1240830 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1240830' 00:04:45.641 killing process with pid 1240830 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1240830 00:04:45.641 14:06:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1240830 00:04:48.167 00:04:48.167 real 0m4.830s 00:04:48.167 user 0m5.528s 00:04:48.167 sys 0m0.736s 00:04:48.167 14:06:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.167 14:06:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.167 ************************************ 00:04:48.167 END TEST exit_on_failed_rpc_init 00:04:48.167 ************************************ 00:04:48.167 14:06:57 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:48.167 14:06:57 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:48.167 00:04:48.167 real 0m24.304s 00:04:48.167 user 0m23.766s 00:04:48.167 sys 0m2.475s 00:04:48.168 14:06:57 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.168 14:06:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.168 ************************************ 00:04:48.168 END TEST skip_rpc 00:04:48.168 ************************************ 00:04:48.168 14:06:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.168 14:06:57 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:48.168 14:06:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.168 14:06:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.168 14:06:57 -- common/autotest_common.sh@10 -- # set +x 00:04:48.168 ************************************ 00:04:48.168 START TEST rpc_client 00:04:48.168 ************************************ 00:04:48.168 14:06:57 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:48.168 * Looking for test storage... 00:04:48.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:48.168 14:06:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:48.168 OK 00:04:48.168 14:06:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:48.168 00:04:48.168 real 0m0.100s 00:04:48.168 user 0m0.044s 00:04:48.168 sys 0m0.061s 00:04:48.168 14:06:57 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.168 14:06:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:48.168 ************************************ 00:04:48.168 END TEST rpc_client 00:04:48.168 ************************************ 00:04:48.168 14:06:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.168 14:06:57 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:48.168 14:06:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.168 14:06:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.168 14:06:57 -- common/autotest_common.sh@10 -- # set +x 00:04:48.168 ************************************ 00:04:48.168 START TEST json_config 00:04:48.168 ************************************ 00:04:48.168 14:06:57 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.426 
14:06:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:48.426 14:06:57 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.426 14:06:57 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.426 14:06:57 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.426 14:06:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.426 14:06:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.426 14:06:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.426 14:06:57 json_config -- paths/export.sh@5 -- # export PATH 00:04:48.426 14:06:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@47 -- # : 0 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.426 14:06:57 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:48.426 14:06:57 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:48.426 INFO: JSON configuration test init 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:48.426 14:06:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:48.426 14:06:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:48.426 14:06:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:48.426 14:06:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.426 14:06:57 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:48.426 14:06:57 json_config -- json_config/common.sh@9 -- # local app=target 00:04:48.426 14:06:57 json_config -- json_config/common.sh@10 -- # shift 00:04:48.426 14:06:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:48.426 14:06:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:48.426 14:06:57 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:48.426 14:06:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.426 14:06:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.426 14:06:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1241606 00:04:48.426 14:06:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:48.426 Waiting for target to run... 00:04:48.426 14:06:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:48.426 14:06:57 json_config -- json_config/common.sh@25 -- # waitforlisten 1241606 /var/tmp/spdk_tgt.sock 00:04:48.426 14:06:57 json_config -- common/autotest_common.sh@829 -- # '[' -z 1241606 ']' 00:04:48.426 14:06:57 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.426 14:06:57 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.426 14:06:57 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:48.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.426 14:06:57 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.426 14:06:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.426 [2024-07-10 14:06:57.762948] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:04:48.426 [2024-07-10 14:06:57.763114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241606 ] 00:04:48.426 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.991 [2024-07-10 14:06:58.335370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.248 [2024-07-10 14:06:58.572652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.248 14:06:58 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.248 14:06:58 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:49.248 14:06:58 json_config -- json_config/common.sh@26 -- # echo '' 00:04:49.248 00:04:49.248 14:06:58 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:49.248 14:06:58 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:49.248 14:06:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.248 14:06:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.248 14:06:58 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:49.248 14:06:58 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:49.248 14:06:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.248 14:06:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.248 14:06:58 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:49.248 14:06:58 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:49.248 14:06:58 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:53.429 14:07:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.429 14:07:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:53.429 14:07:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:53.429 14:07:02 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.429 14:07:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:53.429 14:07:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.429 14:07:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:53.429 14:07:02 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:53.429 14:07:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:53.699 MallocForNvmf0 00:04:53.699 14:07:03 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:53.699 14:07:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:53.960 MallocForNvmf1 00:04:53.960 14:07:03 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:53.960 14:07:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:54.218 [2024-07-10 14:07:03.517467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.218 14:07:03 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:54.218 14:07:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:54.476 14:07:03 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:54.476 14:07:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:54.733 14:07:04 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:54.733 14:07:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:54.991 14:07:04 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:54.991 14:07:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:55.249 [2024-07-10 14:07:04.484793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:55.249 14:07:04 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:55.249 14:07:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:55.249 14:07:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.249 14:07:04 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:55.249 14:07:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:55.249 14:07:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.249 14:07:04 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:55.249 14:07:04 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:55.249 14:07:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:55.513 MallocBdevForConfigChangeCheck 00:04:55.513 14:07:04 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:55.513 14:07:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:55.513 14:07:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.513 14:07:04 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:55.513 14:07:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.823 14:07:05 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:55.823 INFO: shutting down applications... 00:04:55.823 14:07:05 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:55.823 14:07:05 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:55.823 14:07:05 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:55.823 14:07:05 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:57.747 Calling clear_iscsi_subsystem 00:04:57.747 Calling clear_nvmf_subsystem 00:04:57.747 Calling clear_nbd_subsystem 00:04:57.748 Calling clear_ublk_subsystem 00:04:57.748 Calling clear_vhost_blk_subsystem 00:04:57.748 Calling clear_vhost_scsi_subsystem 00:04:57.748 Calling clear_bdev_subsystem 00:04:57.748 14:07:06 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:57.748 14:07:06 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:57.748 14:07:06 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:57.748 14:07:06 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.748 14:07:06 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:57.748 14:07:06 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:58.006 14:07:07 json_config -- json_config/json_config.sh@345 -- # break 00:04:58.006 14:07:07 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:58.006 14:07:07 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:58.006 14:07:07 json_config -- json_config/common.sh@31 -- # local app=target 00:04:58.006 14:07:07 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:58.006 14:07:07 json_config -- json_config/common.sh@35 -- # [[ -n 1241606 ]] 00:04:58.006 14:07:07 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1241606 00:04:58.006 14:07:07 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:58.006 14:07:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.006 14:07:07 json_config -- json_config/common.sh@41 -- # kill -0 1241606 00:04:58.006 14:07:07 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:58.263 14:07:07 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:58.263 14:07:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.263 14:07:07 json_config -- json_config/common.sh@41 -- # kill -0 1241606 00:04:58.263 14:07:07 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:58.830 14:07:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:58.830 14:07:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.830 14:07:08 json_config -- json_config/common.sh@41 -- # kill -0 1241606 
00:04:58.830 14:07:08 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:59.397 14:07:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:59.397 14:07:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.397 14:07:08 json_config -- json_config/common.sh@41 -- # kill -0 1241606 00:04:59.397 14:07:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:59.397 14:07:08 json_config -- json_config/common.sh@43 -- # break 00:04:59.397 14:07:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:59.397 14:07:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:59.397 SPDK target shutdown done 00:04:59.397 14:07:08 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:59.397 INFO: relaunching applications... 00:04:59.397 14:07:08 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.397 14:07:08 json_config -- json_config/common.sh@9 -- # local app=target 00:04:59.397 14:07:08 json_config -- json_config/common.sh@10 -- # shift 00:04:59.397 14:07:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.397 14:07:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.397 14:07:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.397 14:07:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.397 14:07:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.397 14:07:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1243063 00:04:59.397 14:07:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.397 Waiting for target to run... 00:04:59.397 14:07:08 json_config -- json_config/common.sh@25 -- # waitforlisten 1243063 /var/tmp/spdk_tgt.sock 00:04:59.397 14:07:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.397 14:07:08 json_config -- common/autotest_common.sh@829 -- # '[' -z 1243063 ']' 00:04:59.397 14:07:08 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.397 14:07:08 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.397 14:07:08 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.397 14:07:08 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.397 14:07:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.397 [2024-07-10 14:07:08.839045] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:04:59.397 [2024-07-10 14:07:08.839206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243063 ] 00:04:59.656 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.221 [2024-07-10 14:07:09.416643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.221 [2024-07-10 14:07:09.638193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.406 [2024-07-10 14:07:13.327822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.406 [2024-07-10 14:07:13.360333] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.664 14:07:13 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.664 14:07:13 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:04.664 14:07:13 json_config -- json_config/common.sh@26 -- # echo '' 00:05:04.664 00:05:04.664 14:07:13 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:04.664 14:07:13 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:04.664 INFO: Checking if target configuration is the same... 00:05:04.664 14:07:13 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.664 14:07:13 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:04.664 14:07:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.664 + '[' 2 -ne 2 ']' 00:05:04.664 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:04.664 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:04.664 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:04.664 +++ basename /dev/fd/62 00:05:04.664 ++ mktemp /tmp/62.XXX 00:05:04.664 + tmp_file_1=/tmp/62.hzG 00:05:04.664 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.664 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:04.664 + tmp_file_2=/tmp/spdk_tgt_config.json.Eju 00:05:04.664 + ret=0 00:05:04.664 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:04.922 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:04.922 + diff -u /tmp/62.hzG /tmp/spdk_tgt_config.json.Eju 00:05:04.922 + echo 'INFO: JSON config files are the same' 00:05:04.922 INFO: JSON config files are the same 00:05:04.922 + rm /tmp/62.hzG /tmp/spdk_tgt_config.json.Eju 00:05:04.922 + exit 0 00:05:04.922 14:07:14 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:04.922 14:07:14 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:04.922 INFO: changing configuration and checking if this can be detected... 
00:05:04.922 14:07:14 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:04.922 14:07:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:05.179 14:07:14 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.179 14:07:14 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:05.179 14:07:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.179 + '[' 2 -ne 2 ']' 00:05:05.179 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:05.179 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:05.179 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.179 +++ basename /dev/fd/62 00:05:05.179 ++ mktemp /tmp/62.XXX 00:05:05.179 + tmp_file_1=/tmp/62.y61 00:05:05.179 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.179 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:05.179 + tmp_file_2=/tmp/spdk_tgt_config.json.dOD 00:05:05.179 + ret=0 00:05:05.179 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.746 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.746 + diff -u /tmp/62.y61 /tmp/spdk_tgt_config.json.dOD 00:05:05.746 + ret=1 00:05:05.746 + echo '=== Start of file: /tmp/62.y61 ===' 00:05:05.746 + cat /tmp/62.y61 00:05:05.746 + echo '=== End of file: /tmp/62.y61 ===' 00:05:05.746 + echo '' 00:05:05.746 + echo '=== Start of file: /tmp/spdk_tgt_config.json.dOD ===' 00:05:05.746 + cat /tmp/spdk_tgt_config.json.dOD 00:05:05.746 + echo '=== End of file: /tmp/spdk_tgt_config.json.dOD ===' 00:05:05.746 + echo '' 00:05:05.746 + rm /tmp/62.y61 /tmp/spdk_tgt_config.json.dOD 00:05:05.746 + exit 1 00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:05.746 INFO: configuration change detected. 
00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@317 -- # [[ -n 1243063 ]] 00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.746 14:07:15 json_config -- json_config/json_config.sh@323 -- # killprocess 1243063 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@948 -- # '[' -z 1243063 ']' 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@952 -- # kill -0 1243063 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@953 -- # uname 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1243063 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1243063' 00:05:05.746 killing process with pid 1243063 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@967 -- # kill 1243063 00:05:05.746 14:07:15 json_config -- common/autotest_common.sh@972 -- # wait 1243063 00:05:08.281 14:07:17 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.281 14:07:17 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:08.281 14:07:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:08.281 14:07:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.281 14:07:17 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:08.281 14:07:17 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:08.281 INFO: Success 00:05:08.281 00:05:08.281 real 0m20.003s 
00:05:08.281 user 0m21.389s 00:05:08.281 sys 0m2.626s 00:05:08.281 14:07:17 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.281 14:07:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.281 ************************************ 00:05:08.281 END TEST json_config 00:05:08.281 ************************************ 00:05:08.281 14:07:17 -- common/autotest_common.sh@1142 -- # return 0 00:05:08.281 14:07:17 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:08.281 14:07:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.281 14:07:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.281 14:07:17 -- common/autotest_common.sh@10 -- # set +x 00:05:08.281 ************************************ 00:05:08.281 START TEST json_config_extra_key 00:05:08.281 ************************************ 00:05:08.281 14:07:17 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:08.281 14:07:17 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.281 14:07:17 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:08.281 14:07:17 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.281 14:07:17 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.281 14:07:17 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.281 14:07:17 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.281 14:07:17 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.281 14:07:17 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.282 14:07:17 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:08.282 14:07:17 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.282 14:07:17 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:08.282 14:07:17 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:08.282 14:07:17 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:08.282 14:07:17 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.282 14:07:17 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.282 14:07:17 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.282 14:07:17 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:08.282 14:07:17 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:08.282 14:07:17 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:08.282 14:07:17 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:08.282 14:07:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:08.282 14:07:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:08.282 14:07:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:08.282 14:07:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:08.282 14:07:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:08.282 14:07:17 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:08.282 14:07:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:08.282 14:07:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:08.282 14:07:17 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:08.282 14:07:17 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:08.282 INFO: launching applications... 00:05:08.282 14:07:17 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:08.282 14:07:17 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:08.282 14:07:17 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:08.282 14:07:17 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:08.282 14:07:17 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:08.282 14:07:17 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:08.282 14:07:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.282 14:07:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.282 14:07:17 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1244247 00:05:08.282 14:07:17 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:08.282 14:07:17 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:08.282 Waiting for target to run... 00:05:08.282 14:07:17 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1244247 /var/tmp/spdk_tgt.sock 00:05:08.282 14:07:17 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1244247 ']' 00:05:08.282 14:07:17 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:08.282 14:07:17 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.282 14:07:17 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:08.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:08.282 14:07:17 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.282 14:07:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:08.541 [2024-07-10 14:07:17.818629] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:05:08.541 [2024-07-10 14:07:17.818784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244247 ] 00:05:08.541 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.813 [2024-07-10 14:07:18.235142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.071 [2024-07-10 14:07:18.460521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.005 14:07:19 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.005 14:07:19 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:10.005 14:07:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:10.005 00:05:10.005 14:07:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:10.005 INFO: shutting down applications... 00:05:10.005 14:07:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:10.005 14:07:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:10.005 14:07:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:10.005 14:07:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1244247 ]] 00:05:10.005 14:07:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1244247 00:05:10.005 14:07:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:10.005 14:07:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.005 14:07:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1244247 00:05:10.005 14:07:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.263 14:07:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.263 14:07:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.263 14:07:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1244247 00:05:10.263 14:07:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.829 14:07:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.829 14:07:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.829 14:07:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1244247 00:05:10.829 14:07:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.393 14:07:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.393 14:07:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.393 14:07:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1244247 00:05:11.393 14:07:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.958 14:07:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.958 14:07:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.958 14:07:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1244247 00:05:11.958 14:07:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.216 14:07:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.216 14:07:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.216 14:07:21 
json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1244247 00:05:12.216 14:07:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.781 14:07:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.781 14:07:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.781 14:07:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1244247 00:05:12.781 14:07:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:12.781 14:07:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:12.781 14:07:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:12.781 14:07:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:12.781 SPDK target shutdown done 00:05:12.781 14:07:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:12.781 Success 00:05:12.781 00:05:12.781 real 0m4.476s 00:05:12.781 user 0m4.227s 00:05:12.781 sys 0m0.589s 00:05:12.781 14:07:22 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.781 14:07:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:12.781 ************************************ 00:05:12.781 END TEST json_config_extra_key 00:05:12.781 ************************************ 00:05:12.781 14:07:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.781 14:07:22 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.781 14:07:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.781 14:07:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.781 14:07:22 -- common/autotest_common.sh@10 -- # set +x 00:05:12.781 ************************************ 00:05:12.781 START TEST alias_rpc 00:05:12.781 ************************************ 00:05:12.781 14:07:22 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.781 * Looking for test storage... 00:05:12.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:12.781 14:07:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:12.781 14:07:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1244830 00:05:12.781 14:07:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.781 14:07:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1244830 00:05:12.781 14:07:22 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1244830 ']' 00:05:12.781 14:07:22 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.781 14:07:22 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.781 14:07:22 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.781 14:07:22 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.781 14:07:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.039 [2024-07-10 14:07:22.338553] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:05:13.039 [2024-07-10 14:07:22.338703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244830 ] 00:05:13.039 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.039 [2024-07-10 14:07:22.466665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.298 [2024-07-10 14:07:22.727223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.231 14:07:23 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.231 14:07:23 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:14.231 14:07:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:14.489 14:07:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1244830 00:05:14.489 14:07:23 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1244830 ']' 00:05:14.489 14:07:23 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1244830 00:05:14.489 14:07:23 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:14.489 14:07:23 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.489 14:07:23 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1244830 00:05:14.489 14:07:23 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.489 14:07:23 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.489 14:07:23 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1244830' 00:05:14.489 killing process with pid 1244830 00:05:14.489 14:07:23 alias_rpc -- common/autotest_common.sh@967 -- # kill 1244830 00:05:14.489 14:07:23 alias_rpc -- common/autotest_common.sh@972 -- # wait 1244830 00:05:17.020 00:05:17.020 real 0m4.293s 00:05:17.020 user 0m4.384s 00:05:17.020 sys 0m0.624s 00:05:17.020 14:07:26 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.020 14:07:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.020 ************************************ 00:05:17.020 END TEST alias_rpc 00:05:17.020 ************************************ 00:05:17.279 14:07:26 -- common/autotest_common.sh@1142 -- # return 0 00:05:17.279 14:07:26 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:17.279 14:07:26 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:17.279 14:07:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.279 14:07:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.279 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:05:17.279 ************************************ 00:05:17.279 START TEST spdkcli_tcp 00:05:17.279 ************************************ 00:05:17.279 14:07:26 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:17.279 * Looking for test storage... 
00:05:17.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:17.279 14:07:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:17.279 14:07:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:17.279 14:07:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:17.279 14:07:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:17.279 14:07:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:17.279 14:07:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:17.279 14:07:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:17.279 14:07:26 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:17.279 14:07:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.279 14:07:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1245325 00:05:17.279 14:07:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:17.279 14:07:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1245325 00:05:17.279 14:07:26 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1245325 ']' 00:05:17.279 14:07:26 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.279 14:07:26 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.279 14:07:26 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.279 14:07:26 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.279 14:07:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.279 [2024-07-10 14:07:26.701084] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:05:17.279 [2024-07-10 14:07:26.701237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1245325 ] 00:05:17.538 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.538 [2024-07-10 14:07:26.824863] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.796 [2024-07-10 14:07:27.084334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.796 [2024-07-10 14:07:27.084342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.732 14:07:27 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.732 14:07:27 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:18.732 14:07:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1245558 00:05:18.732 14:07:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:18.732 14:07:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:18.732 [ 00:05:18.732 "bdev_malloc_delete", 00:05:18.732 "bdev_malloc_create", 00:05:18.732 "bdev_null_resize", 00:05:18.732 "bdev_null_delete", 00:05:18.732 "bdev_null_create", 00:05:18.732 "bdev_nvme_cuse_unregister", 00:05:18.732 "bdev_nvme_cuse_register", 00:05:18.732 "bdev_opal_new_user", 00:05:18.732 "bdev_opal_set_lock_state", 00:05:18.732 "bdev_opal_delete", 00:05:18.732 "bdev_opal_get_info", 00:05:18.732 "bdev_opal_create", 00:05:18.732 "bdev_nvme_opal_revert", 00:05:18.732 "bdev_nvme_opal_init", 00:05:18.732 "bdev_nvme_send_cmd", 00:05:18.732 "bdev_nvme_get_path_iostat", 00:05:18.732 "bdev_nvme_get_mdns_discovery_info", 00:05:18.732 "bdev_nvme_stop_mdns_discovery", 00:05:18.732 "bdev_nvme_start_mdns_discovery", 00:05:18.732 "bdev_nvme_set_multipath_policy", 00:05:18.732 "bdev_nvme_set_preferred_path", 00:05:18.732 "bdev_nvme_get_io_paths", 00:05:18.732 "bdev_nvme_remove_error_injection", 00:05:18.732 "bdev_nvme_add_error_injection", 00:05:18.732 "bdev_nvme_get_discovery_info", 00:05:18.732 "bdev_nvme_stop_discovery", 00:05:18.732 "bdev_nvme_start_discovery", 00:05:18.732 "bdev_nvme_get_controller_health_info", 00:05:18.732 "bdev_nvme_disable_controller", 00:05:18.732 "bdev_nvme_enable_controller", 00:05:18.732 "bdev_nvme_reset_controller", 00:05:18.732 "bdev_nvme_get_transport_statistics", 00:05:18.732 "bdev_nvme_apply_firmware", 00:05:18.732 "bdev_nvme_detach_controller", 00:05:18.732 "bdev_nvme_get_controllers", 00:05:18.732 "bdev_nvme_attach_controller", 00:05:18.732 "bdev_nvme_set_hotplug", 00:05:18.732 "bdev_nvme_set_options", 00:05:18.732 "bdev_passthru_delete", 00:05:18.732 "bdev_passthru_create", 00:05:18.732 "bdev_lvol_set_parent_bdev", 00:05:18.732 "bdev_lvol_set_parent", 00:05:18.732 "bdev_lvol_check_shallow_copy", 00:05:18.732 "bdev_lvol_start_shallow_copy", 00:05:18.732 "bdev_lvol_grow_lvstore", 00:05:18.732 "bdev_lvol_get_lvols", 00:05:18.732 "bdev_lvol_get_lvstores", 00:05:18.732 "bdev_lvol_delete", 00:05:18.732 "bdev_lvol_set_read_only", 00:05:18.732 "bdev_lvol_resize", 00:05:18.732 "bdev_lvol_decouple_parent", 00:05:18.732 "bdev_lvol_inflate", 00:05:18.732 "bdev_lvol_rename", 00:05:18.732 "bdev_lvol_clone_bdev", 00:05:18.732 "bdev_lvol_clone", 00:05:18.732 "bdev_lvol_snapshot", 00:05:18.732 "bdev_lvol_create", 00:05:18.732 "bdev_lvol_delete_lvstore", 00:05:18.732 
"bdev_lvol_rename_lvstore", 00:05:18.732 "bdev_lvol_create_lvstore", 00:05:18.732 "bdev_raid_set_options", 00:05:18.732 "bdev_raid_remove_base_bdev", 00:05:18.732 "bdev_raid_add_base_bdev", 00:05:18.732 "bdev_raid_delete", 00:05:18.732 "bdev_raid_create", 00:05:18.732 "bdev_raid_get_bdevs", 00:05:18.732 "bdev_error_inject_error", 00:05:18.732 "bdev_error_delete", 00:05:18.732 "bdev_error_create", 00:05:18.732 "bdev_split_delete", 00:05:18.732 "bdev_split_create", 00:05:18.732 "bdev_delay_delete", 00:05:18.732 "bdev_delay_create", 00:05:18.732 "bdev_delay_update_latency", 00:05:18.732 "bdev_zone_block_delete", 00:05:18.732 "bdev_zone_block_create", 00:05:18.732 "blobfs_create", 00:05:18.732 "blobfs_detect", 00:05:18.732 "blobfs_set_cache_size", 00:05:18.732 "bdev_aio_delete", 00:05:18.732 "bdev_aio_rescan", 00:05:18.732 "bdev_aio_create", 00:05:18.732 "bdev_ftl_set_property", 00:05:18.732 "bdev_ftl_get_properties", 00:05:18.732 "bdev_ftl_get_stats", 00:05:18.732 "bdev_ftl_unmap", 00:05:18.732 "bdev_ftl_unload", 00:05:18.732 "bdev_ftl_delete", 00:05:18.732 "bdev_ftl_load", 00:05:18.732 "bdev_ftl_create", 00:05:18.732 "bdev_virtio_attach_controller", 00:05:18.732 "bdev_virtio_scsi_get_devices", 00:05:18.732 "bdev_virtio_detach_controller", 00:05:18.732 "bdev_virtio_blk_set_hotplug", 00:05:18.732 "bdev_iscsi_delete", 00:05:18.732 "bdev_iscsi_create", 00:05:18.732 "bdev_iscsi_set_options", 00:05:18.732 "accel_error_inject_error", 00:05:18.732 "ioat_scan_accel_module", 00:05:18.732 "dsa_scan_accel_module", 00:05:18.732 "iaa_scan_accel_module", 00:05:18.732 "keyring_file_remove_key", 00:05:18.732 "keyring_file_add_key", 00:05:18.732 "keyring_linux_set_options", 00:05:18.732 "iscsi_get_histogram", 00:05:18.732 "iscsi_enable_histogram", 00:05:18.732 "iscsi_set_options", 00:05:18.732 "iscsi_get_auth_groups", 00:05:18.732 "iscsi_auth_group_remove_secret", 00:05:18.732 "iscsi_auth_group_add_secret", 00:05:18.732 "iscsi_delete_auth_group", 00:05:18.732 "iscsi_create_auth_group", 00:05:18.732 "iscsi_set_discovery_auth", 00:05:18.732 "iscsi_get_options", 00:05:18.732 "iscsi_target_node_request_logout", 00:05:18.732 "iscsi_target_node_set_redirect", 00:05:18.732 "iscsi_target_node_set_auth", 00:05:18.732 "iscsi_target_node_add_lun", 00:05:18.732 "iscsi_get_stats", 00:05:18.732 "iscsi_get_connections", 00:05:18.732 "iscsi_portal_group_set_auth", 00:05:18.732 "iscsi_start_portal_group", 00:05:18.732 "iscsi_delete_portal_group", 00:05:18.732 "iscsi_create_portal_group", 00:05:18.732 "iscsi_get_portal_groups", 00:05:18.732 "iscsi_delete_target_node", 00:05:18.732 "iscsi_target_node_remove_pg_ig_maps", 00:05:18.732 "iscsi_target_node_add_pg_ig_maps", 00:05:18.732 "iscsi_create_target_node", 00:05:18.732 "iscsi_get_target_nodes", 00:05:18.732 "iscsi_delete_initiator_group", 00:05:18.732 "iscsi_initiator_group_remove_initiators", 00:05:18.732 "iscsi_initiator_group_add_initiators", 00:05:18.732 "iscsi_create_initiator_group", 00:05:18.732 "iscsi_get_initiator_groups", 00:05:18.732 "nvmf_set_crdt", 00:05:18.732 "nvmf_set_config", 00:05:18.732 "nvmf_set_max_subsystems", 00:05:18.732 "nvmf_stop_mdns_prr", 00:05:18.732 "nvmf_publish_mdns_prr", 00:05:18.732 "nvmf_subsystem_get_listeners", 00:05:18.732 "nvmf_subsystem_get_qpairs", 00:05:18.732 "nvmf_subsystem_get_controllers", 00:05:18.732 "nvmf_get_stats", 00:05:18.732 "nvmf_get_transports", 00:05:18.732 "nvmf_create_transport", 00:05:18.732 "nvmf_get_targets", 00:05:18.732 "nvmf_delete_target", 00:05:18.732 "nvmf_create_target", 00:05:18.732 
"nvmf_subsystem_allow_any_host", 00:05:18.732 "nvmf_subsystem_remove_host", 00:05:18.732 "nvmf_subsystem_add_host", 00:05:18.732 "nvmf_ns_remove_host", 00:05:18.732 "nvmf_ns_add_host", 00:05:18.732 "nvmf_subsystem_remove_ns", 00:05:18.732 "nvmf_subsystem_add_ns", 00:05:18.732 "nvmf_subsystem_listener_set_ana_state", 00:05:18.732 "nvmf_discovery_get_referrals", 00:05:18.732 "nvmf_discovery_remove_referral", 00:05:18.732 "nvmf_discovery_add_referral", 00:05:18.732 "nvmf_subsystem_remove_listener", 00:05:18.732 "nvmf_subsystem_add_listener", 00:05:18.732 "nvmf_delete_subsystem", 00:05:18.732 "nvmf_create_subsystem", 00:05:18.732 "nvmf_get_subsystems", 00:05:18.732 "env_dpdk_get_mem_stats", 00:05:18.732 "nbd_get_disks", 00:05:18.732 "nbd_stop_disk", 00:05:18.732 "nbd_start_disk", 00:05:18.732 "ublk_recover_disk", 00:05:18.732 "ublk_get_disks", 00:05:18.732 "ublk_stop_disk", 00:05:18.732 "ublk_start_disk", 00:05:18.732 "ublk_destroy_target", 00:05:18.732 "ublk_create_target", 00:05:18.732 "virtio_blk_create_transport", 00:05:18.732 "virtio_blk_get_transports", 00:05:18.732 "vhost_controller_set_coalescing", 00:05:18.732 "vhost_get_controllers", 00:05:18.732 "vhost_delete_controller", 00:05:18.732 "vhost_create_blk_controller", 00:05:18.733 "vhost_scsi_controller_remove_target", 00:05:18.733 "vhost_scsi_controller_add_target", 00:05:18.733 "vhost_start_scsi_controller", 00:05:18.733 "vhost_create_scsi_controller", 00:05:18.733 "thread_set_cpumask", 00:05:18.733 "framework_get_governor", 00:05:18.733 "framework_get_scheduler", 00:05:18.733 "framework_set_scheduler", 00:05:18.733 "framework_get_reactors", 00:05:18.733 "thread_get_io_channels", 00:05:18.733 "thread_get_pollers", 00:05:18.733 "thread_get_stats", 00:05:18.733 "framework_monitor_context_switch", 00:05:18.733 "spdk_kill_instance", 00:05:18.733 "log_enable_timestamps", 00:05:18.733 "log_get_flags", 00:05:18.733 "log_clear_flag", 00:05:18.733 "log_set_flag", 00:05:18.733 "log_get_level", 00:05:18.733 "log_set_level", 00:05:18.733 "log_get_print_level", 00:05:18.733 "log_set_print_level", 00:05:18.733 "framework_enable_cpumask_locks", 00:05:18.733 "framework_disable_cpumask_locks", 00:05:18.733 "framework_wait_init", 00:05:18.733 "framework_start_init", 00:05:18.733 "scsi_get_devices", 00:05:18.733 "bdev_get_histogram", 00:05:18.733 "bdev_enable_histogram", 00:05:18.733 "bdev_set_qos_limit", 00:05:18.733 "bdev_set_qd_sampling_period", 00:05:18.733 "bdev_get_bdevs", 00:05:18.733 "bdev_reset_iostat", 00:05:18.733 "bdev_get_iostat", 00:05:18.733 "bdev_examine", 00:05:18.733 "bdev_wait_for_examine", 00:05:18.733 "bdev_set_options", 00:05:18.733 "notify_get_notifications", 00:05:18.733 "notify_get_types", 00:05:18.733 "accel_get_stats", 00:05:18.733 "accel_set_options", 00:05:18.733 "accel_set_driver", 00:05:18.733 "accel_crypto_key_destroy", 00:05:18.733 "accel_crypto_keys_get", 00:05:18.733 "accel_crypto_key_create", 00:05:18.733 "accel_assign_opc", 00:05:18.733 "accel_get_module_info", 00:05:18.733 "accel_get_opc_assignments", 00:05:18.733 "vmd_rescan", 00:05:18.733 "vmd_remove_device", 00:05:18.733 "vmd_enable", 00:05:18.733 "sock_get_default_impl", 00:05:18.733 "sock_set_default_impl", 00:05:18.733 "sock_impl_set_options", 00:05:18.733 "sock_impl_get_options", 00:05:18.733 "iobuf_get_stats", 00:05:18.733 "iobuf_set_options", 00:05:18.733 "framework_get_pci_devices", 00:05:18.733 "framework_get_config", 00:05:18.733 "framework_get_subsystems", 00:05:18.733 "trace_get_info", 00:05:18.733 "trace_get_tpoint_group_mask", 00:05:18.733 
"trace_disable_tpoint_group", 00:05:18.733 "trace_enable_tpoint_group", 00:05:18.733 "trace_clear_tpoint_mask", 00:05:18.733 "trace_set_tpoint_mask", 00:05:18.733 "keyring_get_keys", 00:05:18.733 "spdk_get_version", 00:05:18.733 "rpc_get_methods" 00:05:18.733 ] 00:05:18.733 14:07:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:18.733 14:07:28 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.733 14:07:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.991 14:07:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:18.991 14:07:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1245325 00:05:18.991 14:07:28 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1245325 ']' 00:05:18.991 14:07:28 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1245325 00:05:18.991 14:07:28 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:18.991 14:07:28 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.991 14:07:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1245325 00:05:18.991 14:07:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.991 14:07:28 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.991 14:07:28 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1245325' 00:05:18.991 killing process with pid 1245325 00:05:18.991 14:07:28 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1245325 00:05:18.991 14:07:28 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1245325 00:05:21.516 00:05:21.516 real 0m4.073s 00:05:21.516 user 0m7.123s 00:05:21.516 sys 0m0.689s 00:05:21.516 14:07:30 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.516 14:07:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.516 ************************************ 00:05:21.516 END TEST spdkcli_tcp 00:05:21.516 ************************************ 00:05:21.516 14:07:30 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.516 14:07:30 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:21.516 14:07:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.516 14:07:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.516 14:07:30 -- common/autotest_common.sh@10 -- # set +x 00:05:21.516 ************************************ 00:05:21.517 START TEST dpdk_mem_utility 00:05:21.517 ************************************ 00:05:21.517 14:07:30 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:21.517 * Looking for test storage... 
00:05:21.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:21.517 14:07:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:21.517 14:07:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1245895 00:05:21.517 14:07:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.517 14:07:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1245895 00:05:21.517 14:07:30 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1245895 ']' 00:05:21.517 14:07:30 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.517 14:07:30 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.517 14:07:30 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.517 14:07:30 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.517 14:07:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.517 [2024-07-10 14:07:30.819367] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:05:21.517 [2024-07-10 14:07:30.819545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1245895 ] 00:05:21.517 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.517 [2024-07-10 14:07:30.938984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.775 [2024-07-10 14:07:31.192820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.709 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.709 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:22.709 14:07:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:22.709 14:07:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:22.709 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.709 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.709 { 00:05:22.709 "filename": "/tmp/spdk_mem_dump.txt" 00:05:22.709 } 00:05:22.709 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.709 14:07:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:22.709 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:22.709 1 heaps totaling size 820.000000 MiB 00:05:22.709 size: 820.000000 MiB heap id: 0 00:05:22.709 end heaps---------- 00:05:22.709 8 mempools totaling size 598.116089 MiB 00:05:22.709 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:22.709 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:22.709 size: 84.521057 MiB name: bdev_io_1245895 00:05:22.709 size: 51.011292 MiB name: evtpool_1245895 00:05:22.709 
size: 50.003479 MiB name: msgpool_1245895 00:05:22.709 size: 21.763794 MiB name: PDU_Pool 00:05:22.709 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:22.709 size: 0.026123 MiB name: Session_Pool 00:05:22.709 end mempools------- 00:05:22.709 6 memzones totaling size 4.142822 MiB 00:05:22.709 size: 1.000366 MiB name: RG_ring_0_1245895 00:05:22.709 size: 1.000366 MiB name: RG_ring_1_1245895 00:05:22.709 size: 1.000366 MiB name: RG_ring_4_1245895 00:05:22.709 size: 1.000366 MiB name: RG_ring_5_1245895 00:05:22.709 size: 0.125366 MiB name: RG_ring_2_1245895 00:05:22.709 size: 0.015991 MiB name: RG_ring_3_1245895 00:05:22.709 end memzones------- 00:05:22.709 14:07:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:22.967 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:05:22.967 list of free elements. size: 18.514832 MiB 00:05:22.967 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:22.967 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:22.967 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:22.967 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:22.967 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:22.967 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:22.967 element at address: 0x200019600000 with size: 0.999329 MiB 00:05:22.967 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:22.967 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:22.967 element at address: 0x200018e00000 with size: 0.959900 MiB 00:05:22.967 element at address: 0x200019900040 with size: 0.937256 MiB 00:05:22.967 element at address: 0x200000200000 with size: 0.840942 MiB 00:05:22.967 element at address: 0x20001b000000 with size: 0.583191 MiB 00:05:22.967 element at address: 0x200019200000 with size: 0.491150 MiB 00:05:22.967 element at address: 0x200019a00000 with size: 0.485657 MiB 00:05:22.967 element at address: 0x200013800000 with size: 0.470581 MiB 00:05:22.967 element at address: 0x200028400000 with size: 0.411072 MiB 00:05:22.967 element at address: 0x200003a00000 with size: 0.356140 MiB 00:05:22.967 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:05:22.967 list of standard malloc elements. 
size: 199.220764 MiB 00:05:22.967 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:22.967 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:22.967 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:22.967 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:22.967 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:22.967 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:22.967 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:22.967 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:22.967 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:05:22.967 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:05:22.967 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:22.967 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:22.967 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:22.967 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:22.967 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:05:22.967 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:05:22.967 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:05:22.967 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:05:22.967 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:05:22.967 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:05:22.967 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:22.967 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:22.967 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:22.967 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:22.967 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:22.967 list of memzone associated elements. 
size: 602.264404 MiB 00:05:22.967 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:22.967 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:22.967 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:22.967 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:22.967 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:22.967 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1245895_0 00:05:22.968 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:22.968 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1245895_0 00:05:22.968 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:22.968 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1245895_0 00:05:22.968 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:22.968 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:22.968 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:22.968 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:22.968 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:22.968 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1245895 00:05:22.968 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:22.968 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1245895 00:05:22.968 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:22.968 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1245895 00:05:22.968 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:22.968 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:22.968 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:22.968 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:22.968 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:22.968 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:22.968 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:22.968 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:22.968 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:22.968 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1245895 00:05:22.968 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:22.968 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1245895 00:05:22.968 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:22.968 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1245895 00:05:22.968 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:22.968 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1245895 00:05:22.968 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:22.968 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1245895 00:05:22.968 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:05:22.968 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:22.968 element at address: 0x200013878780 with size: 0.500549 MiB 00:05:22.968 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:22.968 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:05:22.968 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:22.968 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:22.968 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1245895 00:05:22.968 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:05:22.968 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:22.968 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:05:22.968 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:22.968 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:22.968 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1245895 00:05:22.968 element at address: 0x20002846f540 with size: 0.002502 MiB 00:05:22.968 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:22.968 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:05:22.968 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1245895 00:05:22.968 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:22.968 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1245895 00:05:22.968 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:05:22.968 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:22.968 14:07:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:22.968 14:07:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1245895 00:05:22.968 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1245895 ']' 00:05:22.968 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1245895 00:05:22.968 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:22.968 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.968 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1245895 00:05:22.968 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.968 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.968 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1245895' 00:05:22.968 killing process with pid 1245895 00:05:22.968 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1245895 00:05:22.968 14:07:32 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1245895 00:05:25.494 00:05:25.494 real 0m4.105s 00:05:25.494 user 0m4.129s 00:05:25.494 sys 0m0.629s 00:05:25.494 14:07:34 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.494 14:07:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.494 ************************************ 00:05:25.494 END TEST dpdk_mem_utility 00:05:25.494 ************************************ 00:05:25.494 14:07:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:25.494 14:07:34 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:25.494 14:07:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.494 14:07:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.494 14:07:34 -- common/autotest_common.sh@10 -- # set +x 00:05:25.494 ************************************ 00:05:25.495 START TEST event 00:05:25.495 ************************************ 00:05:25.495 14:07:34 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:25.495 * Looking for test storage... 
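The dpdk_mem_utility run above first asks the target to dump its DPDK memory state over RPC and then post-processes the dump with dpdk_mem_info.py; a minimal sketch of that flow, assuming a running spdk_tgt on the default RPC socket, is:

    # Ask the target to write its DPDK memory statistics; the reply names the dump file
    # (the /tmp/spdk_mem_dump.txt seen above).
    scripts/rpc.py env_dpdk_get_mem_stats

    # Summarize heaps, mempools and memzones from the dump.
    scripts/dpdk_mem_info.py

    # Per-heap detail (free elements, malloc elements, associated memzones), as the test does with -m 0.
    scripts/dpdk_mem_info.py -m 0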
00:05:25.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:25.495 14:07:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:25.495 14:07:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:25.495 14:07:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.495 14:07:34 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:25.495 14:07:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.495 14:07:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.495 ************************************ 00:05:25.495 START TEST event_perf 00:05:25.495 ************************************ 00:05:25.495 14:07:34 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.495 Running I/O for 1 seconds...[2024-07-10 14:07:34.932010] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:05:25.495 [2024-07-10 14:07:34.932124] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246481 ] 00:05:25.752 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.752 [2024-07-10 14:07:35.051121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.011 [2024-07-10 14:07:35.309519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.011 [2024-07-10 14:07:35.309577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.011 [2024-07-10 14:07:35.309613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.011 [2024-07-10 14:07:35.309598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.459 Running I/O for 1 seconds... 00:05:27.459 lcore 0: 194038 00:05:27.459 lcore 1: 194037 00:05:27.459 lcore 2: 194037 00:05:27.459 lcore 3: 194038 00:05:27.459 done. 00:05:27.459 00:05:27.459 real 0m1.876s 00:05:27.459 user 0m4.692s 00:05:27.459 sys 0m0.169s 00:05:27.459 14:07:36 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.459 14:07:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.459 ************************************ 00:05:27.459 END TEST event_perf 00:05:27.459 ************************************ 00:05:27.459 14:07:36 event -- common/autotest_common.sh@1142 -- # return 0 00:05:27.459 14:07:36 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:27.459 14:07:36 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:27.459 14:07:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.459 14:07:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.459 ************************************ 00:05:27.459 START TEST event_reactor 00:05:27.459 ************************************ 00:05:27.459 14:07:36 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:27.459 [2024-07-10 14:07:36.860138] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:05:27.459 [2024-07-10 14:07:36.860271] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246697 ] 00:05:27.717 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.717 [2024-07-10 14:07:37.004597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.975 [2024-07-10 14:07:37.265096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.348 test_start 00:05:29.348 oneshot 00:05:29.348 tick 100 00:05:29.348 tick 100 00:05:29.348 tick 250 00:05:29.348 tick 100 00:05:29.348 tick 100 00:05:29.348 tick 100 00:05:29.348 tick 250 00:05:29.348 tick 500 00:05:29.348 tick 100 00:05:29.348 tick 100 00:05:29.349 tick 250 00:05:29.349 tick 100 00:05:29.349 tick 100 00:05:29.349 test_end 00:05:29.349 00:05:29.349 real 0m1.895s 00:05:29.349 user 0m1.733s 00:05:29.349 sys 0m0.153s 00:05:29.349 14:07:38 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.349 14:07:38 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:29.349 ************************************ 00:05:29.349 END TEST event_reactor 00:05:29.349 ************************************ 00:05:29.349 14:07:38 event -- common/autotest_common.sh@1142 -- # return 0 00:05:29.349 14:07:38 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.349 14:07:38 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:29.349 14:07:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.349 14:07:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.349 ************************************ 00:05:29.349 START TEST event_reactor_perf 00:05:29.349 ************************************ 00:05:29.349 14:07:38 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.349 [2024-07-10 14:07:38.804766] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:05:29.349 [2024-07-10 14:07:38.804881] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246928 ] 00:05:29.607 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.607 [2024-07-10 14:07:38.935363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.865 [2024-07-10 14:07:39.196889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.239 test_start 00:05:31.239 test_end 00:05:31.239 Performance: 266905 events per second 00:05:31.239 00:05:31.239 real 0m1.883s 00:05:31.239 user 0m1.706s 00:05:31.239 sys 0m0.166s 00:05:31.239 14:07:40 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.239 14:07:40 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.239 ************************************ 00:05:31.239 END TEST event_reactor_perf 00:05:31.239 ************************************ 00:05:31.239 14:07:40 event -- common/autotest_common.sh@1142 -- # return 0 00:05:31.239 14:07:40 event -- event/event.sh@49 -- # uname -s 00:05:31.239 14:07:40 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:31.239 14:07:40 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:31.239 14:07:40 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.239 14:07:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.239 14:07:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.239 ************************************ 00:05:31.239 START TEST event_scheduler 00:05:31.239 ************************************ 00:05:31.239 14:07:40 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:31.498 * Looking for test storage... 00:05:31.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:31.498 14:07:40 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:31.498 14:07:40 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1247238 00:05:31.498 14:07:40 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:31.498 14:07:40 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.498 14:07:40 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1247238 00:05:31.498 14:07:40 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1247238 ']' 00:05:31.498 14:07:40 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.499 14:07:40 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.499 14:07:40 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:31.499 14:07:40 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.499 14:07:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.499 [2024-07-10 14:07:40.827602] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:05:31.499 [2024-07-10 14:07:40.827748] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247238 ] 00:05:31.499 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.499 [2024-07-10 14:07:40.949936] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.756 [2024-07-10 14:07:41.169416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.756 [2024-07-10 14:07:41.169485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.756 [2024-07-10 14:07:41.169521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.756 [2024-07-10 14:07:41.169526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.322 14:07:41 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.323 14:07:41 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:32.323 14:07:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:32.323 14:07:41 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.323 14:07:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.323 [2024-07-10 14:07:41.756224] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:32.323 [2024-07-10 14:07:41.756296] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:32.323 [2024-07-10 14:07:41.756332] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:32.323 [2024-07-10 14:07:41.756355] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:32.323 [2024-07-10 14:07:41.756372] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:32.323 14:07:41 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.323 14:07:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:32.323 14:07:41 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.323 14:07:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.581 [2024-07-10 14:07:42.055649] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
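The scheduler.sh steps above can switch to the dynamic scheduler before subsystems come up only because the app was launched with --wait-for-rpc; a minimal sketch of that ordering, assuming spdk_tgt on the default RPC socket rather than the test's scheduler app, is:

    # Start the target paused after the RPC server is up but before framework initialization.
    build/bin/spdk_tgt -m 0xF --wait-for-rpc &
    # (a real script would wait for /var/tmp/spdk.sock to appear here, as waitforlisten does)

    # Pre-init RPCs are allowed at this point: pick the scheduler, then let initialization proceed.
    scripts/rpc.py framework_set_scheduler dynamic
    scripts/rpc.py framework_start_init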
00:05:32.581 14:07:42 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.581 14:07:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:32.581 14:07:42 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.581 14:07:42 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.581 14:07:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.840 ************************************ 00:05:32.840 START TEST scheduler_create_thread 00:05:32.840 ************************************ 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.840 2 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.840 3 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.840 4 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.840 5 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.840 6 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.840 7 00:05:32.840 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.841 8 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.841 9 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.841 10 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.841 00:05:32.841 real 0m0.110s 00:05:32.841 user 0m0.010s 00:05:32.841 sys 0m0.003s 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.841 14:07:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.841 ************************************ 00:05:32.841 END TEST scheduler_create_thread 00:05:32.841 ************************************ 00:05:32.841 14:07:42 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:32.841 14:07:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:32.841 14:07:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1247238 00:05:32.841 14:07:42 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1247238 ']' 00:05:32.841 14:07:42 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1247238 00:05:32.841 14:07:42 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:32.841 14:07:42 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.841 14:07:42 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1247238 00:05:32.841 14:07:42 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:32.841 14:07:42 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:32.841 14:07:42 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1247238' 00:05:32.841 killing process with pid 1247238 00:05:32.841 14:07:42 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1247238 00:05:32.841 14:07:42 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1247238 00:05:33.407 [2024-07-10 14:07:42.678647] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
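The scheduler_create_thread test above drives the scheduler app through an rpc.py plugin rather than built-in RPCs; a minimal sketch of that sequence, assuming PYTHONPATH already exposes the scheduler_plugin module the test loads, is:

    # Create a pinned thread on core 0 with 100% active load; the plugin returns the new
    # thread id, as seen in the run above.
    thread_id=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)

    # Lower the same thread to 50% active load.
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50

    # Create and immediately delete a throw-away thread, mirroring the end of the test.
    tmp_id=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tmp_id"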
00:05:34.347 00:05:34.348 real 0m3.080s 00:05:34.348 user 0m4.984s 00:05:34.348 sys 0m0.486s 00:05:34.348 14:07:43 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.348 14:07:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.348 ************************************ 00:05:34.348 END TEST event_scheduler 00:05:34.348 ************************************ 00:05:34.348 14:07:43 event -- common/autotest_common.sh@1142 -- # return 0 00:05:34.348 14:07:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:34.348 14:07:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:34.348 14:07:43 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.348 14:07:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.348 14:07:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.605 ************************************ 00:05:34.605 START TEST app_repeat 00:05:34.605 ************************************ 00:05:34.605 14:07:43 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:34.605 14:07:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.605 14:07:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.605 14:07:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:34.605 14:07:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.605 14:07:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:34.605 14:07:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:34.605 14:07:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:34.606 14:07:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1247681 00:05:34.606 14:07:43 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:34.606 14:07:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.606 14:07:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1247681' 00:05:34.606 Process app_repeat pid: 1247681 00:05:34.606 14:07:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.606 14:07:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:34.606 spdk_app_start Round 0 00:05:34.606 14:07:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1247681 /var/tmp/spdk-nbd.sock 00:05:34.606 14:07:43 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1247681 ']' 00:05:34.606 14:07:43 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.606 14:07:43 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.606 14:07:43 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.606 14:07:43 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.606 14:07:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.606 [2024-07-10 14:07:43.879363] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:05:34.606 [2024-07-10 14:07:43.879540] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247681 ] 00:05:34.606 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.606 [2024-07-10 14:07:44.002188] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.864 [2024-07-10 14:07:44.252308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.864 [2024-07-10 14:07:44.252315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.431 14:07:44 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.431 14:07:44 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:35.431 14:07:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.690 Malloc0 00:05:35.948 14:07:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.207 Malloc1 00:05:36.207 14:07:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.207 14:07:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.465 /dev/nbd0 00:05:36.465 14:07:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.465 14:07:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.465 14:07:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:36.465 14:07:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:36.465 14:07:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.465 14:07:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.465 14:07:45 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:36.465 14:07:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:36.465 14:07:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.465 14:07:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.465 14:07:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.465 1+0 records in 00:05:36.465 1+0 records out 00:05:36.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222319 s, 18.4 MB/s 00:05:36.465 14:07:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.465 14:07:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:36.465 14:07:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.465 14:07:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.465 14:07:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:36.465 14:07:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.465 14:07:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.465 14:07:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.724 /dev/nbd1 00:05:36.724 14:07:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.724 14:07:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.724 14:07:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:36.724 14:07:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:36.724 14:07:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.724 14:07:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.724 14:07:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:36.725 14:07:46 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:36.725 14:07:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.725 14:07:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.725 14:07:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.725 1+0 records in 00:05:36.725 1+0 records out 00:05:36.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252925 s, 16.2 MB/s 00:05:36.725 14:07:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.725 14:07:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:36.725 14:07:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.725 14:07:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.725 14:07:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:36.725 14:07:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.725 14:07:46 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.725 14:07:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.725 14:07:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.725 14:07:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.984 { 00:05:36.984 "nbd_device": "/dev/nbd0", 00:05:36.984 "bdev_name": "Malloc0" 00:05:36.984 }, 00:05:36.984 { 00:05:36.984 "nbd_device": "/dev/nbd1", 00:05:36.984 "bdev_name": "Malloc1" 00:05:36.984 } 00:05:36.984 ]' 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.984 { 00:05:36.984 "nbd_device": "/dev/nbd0", 00:05:36.984 "bdev_name": "Malloc0" 00:05:36.984 }, 00:05:36.984 { 00:05:36.984 "nbd_device": "/dev/nbd1", 00:05:36.984 "bdev_name": "Malloc1" 00:05:36.984 } 00:05:36.984 ]' 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.984 /dev/nbd1' 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.984 /dev/nbd1' 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.984 256+0 records in 00:05:36.984 256+0 records out 00:05:36.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504425 s, 208 MB/s 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.984 256+0 records in 00:05:36.984 256+0 records out 00:05:36.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241278 s, 43.5 MB/s 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.984 256+0 records in 00:05:36.984 256+0 records out 00:05:36.984 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0306024 s, 34.3 MB/s 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.984 14:07:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.243 14:07:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.243 14:07:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.243 14:07:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.243 14:07:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.243 14:07:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.243 14:07:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.243 14:07:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.243 14:07:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.243 14:07:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.243 14:07:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.502 14:07:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.502 14:07:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.502 14:07:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.502 14:07:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.502 14:07:46 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.502 14:07:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.502 14:07:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.502 14:07:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.502 14:07:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.502 14:07:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.502 14:07:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.760 14:07:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.760 14:07:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.760 14:07:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.018 14:07:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.018 14:07:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.018 14:07:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.018 14:07:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.018 14:07:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.018 14:07:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.018 14:07:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.018 14:07:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.018 14:07:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.018 14:07:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.276 14:07:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.652 [2024-07-10 14:07:49.113497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.911 [2024-07-10 14:07:49.367888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.911 [2024-07-10 14:07:49.367889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.169 [2024-07-10 14:07:49.589501] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.169 [2024-07-10 14:07:49.589606] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.544 14:07:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:41.544 14:07:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:41.544 spdk_app_start Round 1 00:05:41.544 14:07:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1247681 /var/tmp/spdk-nbd.sock 00:05:41.544 14:07:50 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1247681 ']' 00:05:41.544 14:07:50 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.544 14:07:50 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.544 14:07:50 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
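Each round above ends with the same data-verification flow: write 1 MiB of random data to a temp file, copy it raw onto each NBD device with O_DIRECT, then read it back with cmp. A hedged sketch of that flow; the path and helper name are illustrative, not the real nbd_common.sh code.

# Write-then-verify pass over one or more NBD devices, as traced in each round.
verify_nbd_data() {
    local tmp=/tmp/nbdrandtest nbd
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for nbd in "$@"; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it out raw
    done
    for nbd in "$@"; do
        cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte readback check
    done
    rm -f "$tmp"
}
# e.g. verify_nbd_data /dev/nbd0 /dev/nbd1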
00:05:41.544 14:07:50 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.544 14:07:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.544 14:07:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.544 14:07:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:41.544 14:07:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.802 Malloc0 00:05:41.802 14:07:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.368 Malloc1 00:05:42.368 14:07:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.368 /dev/nbd0 00:05:42.368 14:07:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.369 14:07:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.369 14:07:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:42.369 14:07:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:42.369 14:07:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:42.369 14:07:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:42.369 14:07:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:42.369 14:07:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:42.369 14:07:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:42.369 14:07:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:42.369 14:07:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:42.369 1+0 records in 00:05:42.369 1+0 records out 00:05:42.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255432 s, 16.0 MB/s 00:05:42.369 14:07:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.369 14:07:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:42.369 14:07:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.369 14:07:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:42.369 14:07:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:42.369 14:07:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.369 14:07:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.369 14:07:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.627 /dev/nbd1 00:05:42.627 14:07:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.627 14:07:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.627 14:07:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:42.627 14:07:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:42.627 14:07:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:42.627 14:07:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:42.627 14:07:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:42.627 14:07:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:42.627 14:07:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:42.627 14:07:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:42.627 14:07:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.886 1+0 records in 00:05:42.886 1+0 records out 00:05:42.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023269 s, 17.6 MB/s 00:05:42.886 14:07:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.886 14:07:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:42.886 14:07:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.886 14:07:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:42.886 14:07:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:42.886 14:07:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.886 14:07:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.886 14:07:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.886 14:07:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.886 14:07:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.144 14:07:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:43.145 { 00:05:43.145 "nbd_device": "/dev/nbd0", 00:05:43.145 "bdev_name": "Malloc0" 00:05:43.145 }, 00:05:43.145 { 00:05:43.145 "nbd_device": "/dev/nbd1", 00:05:43.145 "bdev_name": "Malloc1" 00:05:43.145 } 00:05:43.145 ]' 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.145 { 00:05:43.145 "nbd_device": "/dev/nbd0", 00:05:43.145 "bdev_name": "Malloc0" 00:05:43.145 }, 00:05:43.145 { 00:05:43.145 "nbd_device": "/dev/nbd1", 00:05:43.145 "bdev_name": "Malloc1" 00:05:43.145 } 00:05:43.145 ]' 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.145 /dev/nbd1' 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.145 /dev/nbd1' 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.145 256+0 records in 00:05:43.145 256+0 records out 00:05:43.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501774 s, 209 MB/s 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.145 256+0 records in 00:05:43.145 256+0 records out 00:05:43.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274886 s, 38.1 MB/s 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.145 256+0 records in 00:05:43.145 256+0 records out 00:05:43.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281579 s, 37.2 MB/s 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.145 14:07:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.403 14:07:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.403 14:07:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.403 14:07:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.403 14:07:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.403 14:07:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.403 14:07:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.403 14:07:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.404 14:07:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.404 14:07:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.404 14:07:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.666 14:07:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.666 14:07:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.666 14:07:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.666 14:07:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.666 14:07:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.666 14:07:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.666 14:07:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.666 14:07:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.666 14:07:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.666 14:07:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.666 14:07:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.925 14:07:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.925 14:07:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.925 14:07:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.925 14:07:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.925 14:07:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.925 14:07:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.925 14:07:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:43.925 14:07:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.925 14:07:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.925 14:07:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.925 14:07:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.925 14:07:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.925 14:07:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.491 14:07:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.866 [2024-07-10 14:07:55.155355] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.125 [2024-07-10 14:07:55.395561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.125 [2024-07-10 14:07:55.395561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.383 [2024-07-10 14:07:55.609440] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.383 [2024-07-10 14:07:55.609540] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.316 14:07:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.316 14:07:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:47.316 spdk_app_start Round 2 00:05:47.316 14:07:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1247681 /var/tmp/spdk-nbd.sock 00:05:47.316 14:07:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1247681 ']' 00:05:47.316 14:07:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.316 14:07:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.316 14:07:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
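The per-round setup visible in the trace is two rpc.py calls per bdev: bdev_malloc_create to make a 64 MiB malloc bdev with 4 KiB blocks, then nbd_start_disk to export it as a kernel NBD device. A short sketch of those calls, using the same socket and sizes as the trace (assumes the nbd kernel module is loaded):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
m0=$($rpc -s "$sock" bdev_malloc_create 64 4096)    # prints the new bdev name, e.g. Malloc0
m1=$($rpc -s "$sock" bdev_malloc_create 64 4096)
$rpc -s "$sock" nbd_start_disk "$m0" /dev/nbd0
$rpc -s "$sock" nbd_start_disk "$m1" /dev/nbd1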
00:05:47.316 14:07:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.316 14:07:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.574 14:07:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.574 14:07:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:47.574 14:07:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.832 Malloc0 00:05:48.090 14:07:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.349 Malloc1 00:05:48.349 14:07:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.349 14:07:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.349 14:07:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.349 14:07:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.349 14:07:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.349 14:07:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.350 14:07:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.350 14:07:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.350 14:07:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.350 14:07:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.350 14:07:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.350 14:07:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.350 14:07:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.350 14:07:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.350 14:07:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.350 14:07:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.608 /dev/nbd0 00:05:48.608 14:07:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.608 14:07:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.608 14:07:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:48.608 14:07:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:48.608 14:07:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:48.608 14:07:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:48.608 14:07:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:48.608 14:07:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:48.608 14:07:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:48.608 14:07:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:48.608 14:07:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:48.608 1+0 records in 00:05:48.608 1+0 records out 00:05:48.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206675 s, 19.8 MB/s 00:05:48.608 14:07:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.608 14:07:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:48.608 14:07:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.608 14:07:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:48.608 14:07:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:48.608 14:07:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.608 14:07:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.608 14:07:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.866 /dev/nbd1 00:05:48.866 14:07:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.866 14:07:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.866 14:07:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:48.866 14:07:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:48.866 14:07:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:48.866 14:07:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:48.866 14:07:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:48.866 14:07:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:48.866 14:07:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:48.866 14:07:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:48.866 14:07:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.866 1+0 records in 00:05:48.866 1+0 records out 00:05:48.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020476 s, 20.0 MB/s 00:05:48.866 14:07:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.866 14:07:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:48.866 14:07:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.866 14:07:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:48.866 14:07:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:48.866 14:07:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.866 14:07:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.866 14:07:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.866 14:07:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.866 14:07:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.125 14:07:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:49.125 { 00:05:49.125 "nbd_device": "/dev/nbd0", 00:05:49.125 "bdev_name": "Malloc0" 00:05:49.125 }, 00:05:49.125 { 00:05:49.125 "nbd_device": "/dev/nbd1", 00:05:49.125 "bdev_name": "Malloc1" 00:05:49.125 } 00:05:49.125 ]' 00:05:49.125 14:07:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.125 { 00:05:49.125 "nbd_device": "/dev/nbd0", 00:05:49.125 "bdev_name": "Malloc0" 00:05:49.125 }, 00:05:49.125 { 00:05:49.125 "nbd_device": "/dev/nbd1", 00:05:49.125 "bdev_name": "Malloc1" 00:05:49.125 } 00:05:49.125 ]' 00:05:49.125 14:07:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.126 /dev/nbd1' 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.126 /dev/nbd1' 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.126 256+0 records in 00:05:49.126 256+0 records out 00:05:49.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498252 s, 210 MB/s 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.126 256+0 records in 00:05:49.126 256+0 records out 00:05:49.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280731 s, 37.4 MB/s 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.126 256+0 records in 00:05:49.126 256+0 records out 00:05:49.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.03196 s, 32.8 MB/s 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.126 14:07:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.385 14:07:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.385 14:07:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.385 14:07:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.385 14:07:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.385 14:07:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.385 14:07:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.385 14:07:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.386 14:07:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.386 14:07:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.386 14:07:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.644 14:07:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.902 14:07:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.902 14:07:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.902 14:07:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.902 14:07:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.902 14:07:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.903 14:07:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.903 14:07:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.903 14:07:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.903 14:07:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.903 14:07:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.903 14:07:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.903 14:07:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.903 14:07:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.161 14:07:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.161 14:07:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.161 14:07:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.161 14:07:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.161 14:07:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.161 14:07:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.161 14:07:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.161 14:07:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.161 14:07:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.161 14:07:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.421 14:07:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.795 [2024-07-10 14:08:01.233305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.053 [2024-07-10 14:08:01.487597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.053 [2024-07-10 14:08:01.487596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.311 [2024-07-10 14:08:01.709849] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.311 [2024-07-10 14:08:01.709943] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.681 14:08:02 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1247681 /var/tmp/spdk-nbd.sock 00:05:53.681 14:08:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1247681 ']' 00:05:53.681 14:08:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.681 14:08:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.681 14:08:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
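The waitfornbd / waitfornbd_exit steps traced in each round poll /proc/partitions until the nbd name appears (followed by one direct read to prove the device services I/O) or disappears after nbd_stop_disk. A hedged sketch of both loops; retry counts and the dd probe target are illustrative.

wait_nbd_up() {
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1
    done
    # one direct read proves the device actually answers I/O
    dd if=/dev/"$name" of=/dev/null bs=4096 count=1 iflag=direct
}
wait_nbd_gone() {
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}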
00:05:53.681 14:08:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.681 14:08:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.681 14:08:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.681 14:08:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:53.681 14:08:03 event.app_repeat -- event/event.sh@39 -- # killprocess 1247681 00:05:53.681 14:08:03 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1247681 ']' 00:05:53.681 14:08:03 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1247681 00:05:53.681 14:08:03 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:53.681 14:08:03 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.681 14:08:03 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1247681 00:05:53.681 14:08:03 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.681 14:08:03 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.681 14:08:03 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1247681' 00:05:53.681 killing process with pid 1247681 00:05:53.681 14:08:03 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1247681 00:05:53.681 14:08:03 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1247681 00:05:55.062 spdk_app_start is called in Round 0. 00:05:55.062 Shutdown signal received, stop current app iteration 00:05:55.062 Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 reinitialization... 00:05:55.062 spdk_app_start is called in Round 1. 00:05:55.062 Shutdown signal received, stop current app iteration 00:05:55.062 Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 reinitialization... 00:05:55.062 spdk_app_start is called in Round 2. 00:05:55.062 Shutdown signal received, stop current app iteration 00:05:55.062 Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 reinitialization... 00:05:55.062 spdk_app_start is called in Round 3. 
00:05:55.062 Shutdown signal received, stop current app iteration 00:05:55.062 14:08:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:55.062 14:08:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:55.062 00:05:55.062 real 0m20.543s 00:05:55.062 user 0m42.084s 00:05:55.062 sys 0m3.383s 00:05:55.062 14:08:04 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.062 14:08:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.062 ************************************ 00:05:55.062 END TEST app_repeat 00:05:55.062 ************************************ 00:05:55.062 14:08:04 event -- common/autotest_common.sh@1142 -- # return 0 00:05:55.062 14:08:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:55.062 14:08:04 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:55.062 14:08:04 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.062 14:08:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.062 14:08:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.062 ************************************ 00:05:55.062 START TEST cpu_locks 00:05:55.062 ************************************ 00:05:55.062 14:08:04 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:55.062 * Looking for test storage... 00:05:55.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:55.062 14:08:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:55.062 14:08:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:55.062 14:08:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:55.062 14:08:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:55.062 14:08:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.062 14:08:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.062 14:08:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.062 ************************************ 00:05:55.062 START TEST default_locks 00:05:55.062 ************************************ 00:05:55.062 14:08:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:55.062 14:08:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1250310 00:05:55.062 14:08:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.062 14:08:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1250310 00:05:55.062 14:08:04 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1250310 ']' 00:05:55.062 14:08:04 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.062 14:08:04 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.062 14:08:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
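The default_locks test above checks for the CPU core lock by piping lslocks for the spdk_tgt pid into grep. A minimal sketch of that check, assuming the lock files carry an spdk_cpu_lock prefix as the grep pattern in the trace suggests; the harmless "lslocks: write error" seen in the log is most likely just lslocks hitting a closed pipe once grep -q finds its first match and exits.

locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
# e.g. locks_exist "$spdk_tgt_pid" || echo "no CPU core locks held"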
00:05:55.062 14:08:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.062 14:08:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.320 [2024-07-10 14:08:04.589261] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:05:55.320 [2024-07-10 14:08:04.589412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250310 ] 00:05:55.320 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.320 [2024-07-10 14:08:04.734313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.578 [2024-07-10 14:08:04.996809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.511 14:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.511 14:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:56.511 14:08:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1250310 00:05:56.511 14:08:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1250310 00:05:56.511 14:08:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.768 lslocks: write error 00:05:56.768 14:08:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1250310 00:05:56.768 14:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1250310 ']' 00:05:56.768 14:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1250310 00:05:56.768 14:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:56.768 14:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.768 14:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1250310 00:05:57.026 14:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.026 14:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.026 14:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1250310' 00:05:57.026 killing process with pid 1250310 00:05:57.026 14:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1250310 00:05:57.026 14:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1250310 00:05:59.679 14:08:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1250310 00:05:59.679 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1250310 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1250310 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1250310 ']' 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1250310) - No such process 00:05:59.680 ERROR: process (pid: 1250310) is no longer running 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:59.680 00:05:59.680 real 0m4.324s 00:05:59.680 user 0m4.278s 00:05:59.680 sys 0m0.765s 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.680 14:08:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.680 ************************************ 00:05:59.680 END TEST default_locks 00:05:59.680 ************************************ 00:05:59.680 14:08:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.680 14:08:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:59.680 14:08:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.680 14:08:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.680 14:08:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.680 ************************************ 00:05:59.680 START TEST default_locks_via_rpc 00:05:59.680 ************************************ 00:05:59.680 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:59.680 14:08:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1250867 00:05:59.680 14:08:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.680 14:08:08 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1250867 00:05:59.680 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1250867 ']' 00:05:59.680 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.680 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.680 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.680 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.680 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.680 [2024-07-10 14:08:08.968731] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:05:59.680 [2024-07-10 14:08:08.968869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250867 ] 00:05:59.680 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.680 [2024-07-10 14:08:09.100245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.938 [2024-07-10 14:08:09.358632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1250867 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1250867 00:06:00.875 14:08:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.133 
14:08:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1250867 00:06:01.133 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1250867 ']' 00:06:01.133 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1250867 00:06:01.133 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:01.133 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.133 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1250867 00:06:01.133 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.133 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.133 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1250867' 00:06:01.133 killing process with pid 1250867 00:06:01.133 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1250867 00:06:01.133 14:08:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1250867 00:06:03.661 00:06:03.661 real 0m4.192s 00:06:03.661 user 0m4.163s 00:06:03.661 sys 0m0.719s 00:06:03.661 14:08:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.661 14:08:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.661 ************************************ 00:06:03.661 END TEST default_locks_via_rpc 00:06:03.661 ************************************ 00:06:03.661 14:08:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:03.661 14:08:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:03.661 14:08:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.661 14:08:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.661 14:08:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.661 ************************************ 00:06:03.661 START TEST non_locking_app_on_locked_coremask 00:06:03.661 ************************************ 00:06:03.661 14:08:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:03.661 14:08:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1251398 00:06:03.661 14:08:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.661 14:08:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1251398 /var/tmp/spdk.sock 00:06:03.661 14:08:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1251398 ']' 00:06:03.661 14:08:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.661 14:08:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.661 14:08:13 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.661 14:08:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.661 14:08:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.918 [2024-07-10 14:08:13.222533] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:06:03.918 [2024-07-10 14:08:13.222697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251398 ] 00:06:03.918 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.918 [2024-07-10 14:08:13.355724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.175 [2024-07-10 14:08:13.612261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.106 14:08:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.106 14:08:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:05.106 14:08:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1251567 00:06:05.106 14:08:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:05.106 14:08:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1251567 /var/tmp/spdk2.sock 00:06:05.106 14:08:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1251567 ']' 00:06:05.106 14:08:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.106 14:08:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.106 14:08:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.106 14:08:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.106 14:08:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.363 [2024-07-10 14:08:14.601110] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:06:05.364 [2024-07-10 14:08:14.601269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251567 ] 00:06:05.364 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.364 [2024-07-10 14:08:14.790105] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
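non_locking_app_on_locked_coremask layers a second target on top of the first: pid 1251398 keeps its claim on core 0, while pid 1251567 is started on the same mask but with --disable-cpumask-locks, which is why the line above reports "CPU core locks deactivated." and the instance comes up anyway. The launch pattern, with the same flags and sockets as in the trace (paths abbreviated):

    build/bin/spdk_tgt -m 0x1 &                                                  # holds the core 0 lock
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0, makes no claim

The locks_exist check that follows still resolves the lock to the first pid, which is the point of the test.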
00:06:05.364 [2024-07-10 14:08:14.790176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.930 [2024-07-10 14:08:15.310944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.829 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.829 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:07.829 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1251398 00:06:07.829 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1251398 00:06:07.829 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.764 lslocks: write error 00:06:08.764 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1251398 00:06:08.764 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1251398 ']' 00:06:08.764 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1251398 00:06:08.764 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:08.764 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.764 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1251398 00:06:08.764 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.764 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.764 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1251398' 00:06:08.764 killing process with pid 1251398 00:06:08.764 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1251398 00:06:08.764 14:08:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1251398 00:06:14.028 14:08:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1251567 00:06:14.028 14:08:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1251567 ']' 00:06:14.028 14:08:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1251567 00:06:14.028 14:08:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:14.028 14:08:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.028 14:08:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1251567 00:06:14.028 14:08:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.028 14:08:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.028 14:08:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1251567' 00:06:14.028 
killing process with pid 1251567 00:06:14.028 14:08:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1251567 00:06:14.028 14:08:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1251567 00:06:16.559 00:06:16.559 real 0m12.467s 00:06:16.559 user 0m12.804s 00:06:16.559 sys 0m1.569s 00:06:16.559 14:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.560 14:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.560 ************************************ 00:06:16.560 END TEST non_locking_app_on_locked_coremask 00:06:16.560 ************************************ 00:06:16.560 14:08:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:16.560 14:08:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:16.560 14:08:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.560 14:08:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.560 14:08:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.560 ************************************ 00:06:16.560 START TEST locking_app_on_unlocked_coremask 00:06:16.560 ************************************ 00:06:16.560 14:08:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:16.560 14:08:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1252930 00:06:16.560 14:08:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:16.560 14:08:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1252930 /var/tmp/spdk.sock 00:06:16.560 14:08:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1252930 ']' 00:06:16.560 14:08:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.560 14:08:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.560 14:08:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.560 14:08:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.560 14:08:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.560 [2024-07-10 14:08:25.721621] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:06:16.560 [2024-07-10 14:08:25.721780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1252930 ] 00:06:16.560 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.560 [2024-07-10 14:08:25.845905] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:16.560 [2024-07-10 14:08:25.845963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.818 [2024-07-10 14:08:26.102672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.754 14:08:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.754 14:08:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:17.754 14:08:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1253069 00:06:17.754 14:08:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1253069 /var/tmp/spdk2.sock 00:06:17.754 14:08:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1253069 ']' 00:06:17.754 14:08:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.754 14:08:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:17.754 14:08:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.754 14:08:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.755 14:08:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.755 14:08:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.755 [2024-07-10 14:08:27.078593] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
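locking_app_on_unlocked_coremask flips the ordering of the previous test: the first target (pid 1252930) starts with --disable-cpumask-locks, so nothing holds core 0 when the second target (pid 1253069) arrives on the same mask and claims it normally; the locks_exist check further down therefore resolves the lock to the second pid. The launch order, abbreviated:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &    # comes up first, creates no lock file
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # claims core 0 as usual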
00:06:17.755 [2024-07-10 14:08:27.078739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253069 ] 00:06:17.755 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.013 [2024-07-10 14:08:27.265272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.579 [2024-07-10 14:08:27.792543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.480 14:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.480 14:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:20.480 14:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1253069 00:06:20.480 14:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1253069 00:06:20.480 14:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.046 lslocks: write error 00:06:21.046 14:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1252930 00:06:21.046 14:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1252930 ']' 00:06:21.046 14:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1252930 00:06:21.046 14:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:21.046 14:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.047 14:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1252930 00:06:21.047 14:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.047 14:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.047 14:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1252930' 00:06:21.047 killing process with pid 1252930 00:06:21.047 14:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1252930 00:06:21.047 14:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1252930 00:06:26.312 14:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1253069 00:06:26.312 14:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1253069 ']' 00:06:26.312 14:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1253069 00:06:26.312 14:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:26.312 14:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.312 14:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1253069 00:06:26.312 14:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:26.312 14:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.312 14:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1253069' 00:06:26.312 killing process with pid 1253069 00:06:26.312 14:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1253069 00:06:26.312 14:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1253069 00:06:28.842 00:06:28.842 real 0m12.380s 00:06:28.842 user 0m12.727s 00:06:28.842 sys 0m1.479s 00:06:28.842 14:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.842 14:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.842 ************************************ 00:06:28.842 END TEST locking_app_on_unlocked_coremask 00:06:28.842 ************************************ 00:06:28.842 14:08:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:28.842 14:08:38 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:28.842 14:08:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.842 14:08:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.842 14:08:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.842 ************************************ 00:06:28.842 START TEST locking_app_on_locked_coremask 00:06:28.842 ************************************ 00:06:28.842 14:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:28.842 14:08:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1254380 00:06:28.842 14:08:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.842 14:08:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1254380 /var/tmp/spdk.sock 00:06:28.842 14:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1254380 ']' 00:06:28.842 14:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.842 14:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.842 14:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.842 14:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.842 14:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.842 [2024-07-10 14:08:38.155277] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:06:28.842 [2024-07-10 14:08:38.155458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254380 ] 00:06:28.842 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.842 [2024-07-10 14:08:38.283711] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.101 [2024-07-10 14:08:38.540965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1254566 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1254566 /var/tmp/spdk2.sock 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1254566 /var/tmp/spdk2.sock 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1254566 /var/tmp/spdk2.sock 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1254566 ']' 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.034 14:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.034 [2024-07-10 14:08:39.500864] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
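locking_app_on_locked_coremask inverts that: the second target (pid 1254566) asks for the already-claimed mask without --disable-cpumask-locks, so the NOT wrapper around waitforlisten expects it to die during startup, and the next lines show exactly that ("Cannot create lock on core 0, probably process 1254380 has claimed it."). A rough expected-failure sketch, assuming the first target already holds core 0:

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &   # no --disable-cpumask-locks this time
    pid2=$!
    sleep 1
    kill -0 "$pid2" 2>/dev/null || echo "second target exited during startup, as expected"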
00:06:30.034 [2024-07-10 14:08:39.501048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254566 ] 00:06:30.292 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.292 [2024-07-10 14:08:39.685623] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1254380 has claimed it. 00:06:30.292 [2024-07-10 14:08:39.685712] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:30.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1254566) - No such process 00:06:30.857 ERROR: process (pid: 1254566) is no longer running 00:06:30.857 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.857 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:30.857 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:30.857 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.857 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.857 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.857 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1254380 00:06:30.857 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1254380 00:06:30.857 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.114 lslocks: write error 00:06:31.114 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1254380 00:06:31.114 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1254380 ']' 00:06:31.114 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1254380 00:06:31.114 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:31.114 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.114 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1254380 00:06:31.114 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.114 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.114 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1254380' 00:06:31.114 killing process with pid 1254380 00:06:31.114 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1254380 00:06:31.114 14:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1254380 00:06:33.639 00:06:33.639 real 0m5.039s 00:06:33.639 user 0m5.320s 00:06:33.639 sys 0m0.907s 00:06:33.639 14:08:43 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.639 14:08:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.639 ************************************ 00:06:33.639 END TEST locking_app_on_locked_coremask 00:06:33.639 ************************************ 00:06:33.896 14:08:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:33.896 14:08:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:33.896 14:08:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.896 14:08:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.896 14:08:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.896 ************************************ 00:06:33.896 START TEST locking_overlapped_coremask 00:06:33.896 ************************************ 00:06:33.896 14:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:33.896 14:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1255000 00:06:33.896 14:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:33.896 14:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1255000 /var/tmp/spdk.sock 00:06:33.896 14:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1255000 ']' 00:06:33.896 14:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.896 14:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.896 14:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.896 14:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.896 14:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.896 [2024-07-10 14:08:43.251510] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:06:33.896 [2024-07-10 14:08:43.251667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255000 ] 00:06:33.896 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.154 [2024-07-10 14:08:43.383669] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.413 [2024-07-10 14:08:43.654980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.413 [2024-07-10 14:08:43.655036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.413 [2024-07-10 14:08:43.655044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1255142 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1255142 /var/tmp/spdk2.sock 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1255142 /var/tmp/spdk2.sock 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1255142 /var/tmp/spdk2.sock 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1255142 ']' 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.348 14:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.348 [2024-07-10 14:08:44.662023] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
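locking_overlapped_coremask repeats the scenario with partially overlapping masks: the first target (pid 1255000) runs on 0x7 (cores 0-2), the second (pid 1255142) asks for 0x1c (cores 2-4), so only core 2 is contested, and that is the core named in the claim failure on the following lines:

    printf 'contested core mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2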
00:06:35.348 [2024-07-10 14:08:44.662173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255142 ] 00:06:35.348 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.606 [2024-07-10 14:08:44.842782] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1255000 has claimed it. 00:06:35.606 [2024-07-10 14:08:44.842871] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:35.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1255142) - No such process 00:06:35.863 ERROR: process (pid: 1255142) is no longer running 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1255000 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1255000 ']' 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1255000 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.863 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1255000 00:06:36.164 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.164 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.164 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1255000' 00:06:36.164 killing process with pid 1255000 00:06:36.164 14:08:45 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1255000 00:06:36.164 14:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1255000 00:06:38.735 00:06:38.735 real 0m4.495s 00:06:38.735 user 0m11.590s 00:06:38.735 sys 0m0.785s 00:06:38.735 14:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.735 14:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.735 ************************************ 00:06:38.735 END TEST locking_overlapped_coremask 00:06:38.735 ************************************ 00:06:38.735 14:08:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:38.735 14:08:47 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:38.735 14:08:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.735 14:08:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.735 14:08:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.735 ************************************ 00:06:38.735 START TEST locking_overlapped_coremask_via_rpc 00:06:38.735 ************************************ 00:06:38.735 14:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:38.735 14:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1255578 00:06:38.735 14:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:38.735 14:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1255578 /var/tmp/spdk.sock 00:06:38.735 14:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1255578 ']' 00:06:38.735 14:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.735 14:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.735 14:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.735 14:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.735 14:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.735 [2024-07-10 14:08:47.797325] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:06:38.735 [2024-07-10 14:08:47.797494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255578 ] 00:06:38.735 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.735 [2024-07-10 14:08:47.928511] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:38.735 [2024-07-10 14:08:47.928561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.735 [2024-07-10 14:08:48.192438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.735 [2024-07-10 14:08:48.192854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.735 [2024-07-10 14:08:48.192858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.670 14:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.670 14:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:39.670 14:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1255716 00:06:39.670 14:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:39.670 14:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1255716 /var/tmp/spdk2.sock 00:06:39.670 14:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1255716 ']' 00:06:39.670 14:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.670 14:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.670 14:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.671 14:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.671 14:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.929 [2024-07-10 14:08:49.195090] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:06:39.929 [2024-07-10 14:08:49.195248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255716 ] 00:06:39.929 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.929 [2024-07-10 14:08:49.390213] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:39.929 [2024-07-10 14:08:49.390273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.495 [2024-07-10 14:08:49.924006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.495 [2024-07-10 14:08:49.924058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.495 [2024-07-10 14:08:49.924063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.024 [2024-07-10 14:08:51.906610] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1255578 has claimed it. 
00:06:43.024 request: 00:06:43.024 { 00:06:43.024 "method": "framework_enable_cpumask_locks", 00:06:43.024 "req_id": 1 00:06:43.024 } 00:06:43.024 Got JSON-RPC error response 00:06:43.024 response: 00:06:43.024 { 00:06:43.024 "code": -32603, 00:06:43.024 "message": "Failed to claim CPU core: 2" 00:06:43.024 } 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1255578 /var/tmp/spdk.sock 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1255578 ']' 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.024 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.025 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.025 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1255716 /var/tmp/spdk2.sock 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1255716 ']' 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
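Note: the -32603 response traced above is the expected outcome. framework_enable_cpumask_locks is issued first against the default socket, where it succeeds and process 1255578 takes the core 0-2 lock files, and then against /var/tmp/spdk2.sock, where claiming core 2 collides with the lock 1255578 already holds. Re-issued by hand the two calls would look roughly like this (a sketch using rpc.py from the spdk checkout; the test goes through its rpc_cmd wrapper instead):
  scripts/rpc.py framework_enable_cpumask_locks                         # first target, default /var/tmp/spdk.sock, succeeds
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target, fails to claim core 2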
00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:43.025 00:06:43.025 real 0m4.705s 00:06:43.025 user 0m1.511s 00:06:43.025 sys 0m0.263s 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.025 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.025 ************************************ 00:06:43.025 END TEST locking_overlapped_coremask_via_rpc 00:06:43.025 ************************************ 00:06:43.025 14:08:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:43.025 14:08:52 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:43.025 14:08:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1255578 ]] 00:06:43.025 14:08:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1255578 00:06:43.025 14:08:52 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1255578 ']' 00:06:43.025 14:08:52 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1255578 00:06:43.025 14:08:52 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:43.025 14:08:52 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.025 14:08:52 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1255578 00:06:43.025 14:08:52 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.025 14:08:52 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.025 14:08:52 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1255578' 00:06:43.025 killing process with pid 1255578 00:06:43.025 14:08:52 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1255578 00:06:43.025 14:08:52 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1255578 00:06:45.545 14:08:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1255716 ]] 00:06:45.545 14:08:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1255716 00:06:45.545 14:08:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1255716 ']' 00:06:45.545 14:08:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1255716 00:06:45.545 14:08:54 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:45.545 14:08:54 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.545 14:08:54 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1255716 00:06:45.545 14:08:54 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:45.545 14:08:54 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:45.545 14:08:54 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1255716' 00:06:45.545 killing process with pid 1255716 00:06:45.545 14:08:54 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1255716 00:06:45.545 14:08:54 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1255716 00:06:48.074 14:08:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:48.074 14:08:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:48.074 14:08:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1255578 ]] 00:06:48.074 14:08:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1255578 00:06:48.074 14:08:57 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1255578 ']' 00:06:48.074 14:08:57 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1255578 00:06:48.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1255578) - No such process 00:06:48.074 14:08:57 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1255578 is not found' 00:06:48.074 Process with pid 1255578 is not found 00:06:48.074 14:08:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1255716 ]] 00:06:48.074 14:08:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1255716 00:06:48.074 14:08:57 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1255716 ']' 00:06:48.074 14:08:57 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1255716 00:06:48.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1255716) - No such process 00:06:48.074 14:08:57 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1255716 is not found' 00:06:48.074 Process with pid 1255716 is not found 00:06:48.074 14:08:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:48.074 00:06:48.074 real 0m52.592s 00:06:48.074 user 1m26.988s 00:06:48.074 sys 0m7.770s 00:06:48.074 14:08:57 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.074 14:08:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.074 ************************************ 00:06:48.074 END TEST cpu_locks 00:06:48.074 ************************************ 00:06:48.074 14:08:57 event -- common/autotest_common.sh@1142 -- # return 0 00:06:48.074 00:06:48.074 real 1m22.216s 00:06:48.074 user 2m22.329s 00:06:48.074 sys 0m12.355s 00:06:48.074 14:08:57 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.074 14:08:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.074 ************************************ 00:06:48.074 END TEST event 00:06:48.074 ************************************ 00:06:48.074 14:08:57 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.074 14:08:57 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:48.074 14:08:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.074 14:08:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.074 
14:08:57 -- common/autotest_common.sh@10 -- # set +x 00:06:48.074 ************************************ 00:06:48.074 START TEST thread 00:06:48.074 ************************************ 00:06:48.074 14:08:57 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:48.074 * Looking for test storage... 00:06:48.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:48.074 14:08:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.074 14:08:57 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:48.074 14:08:57 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.074 14:08:57 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.074 ************************************ 00:06:48.074 START TEST thread_poller_perf 00:06:48.074 ************************************ 00:06:48.074 14:08:57 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.074 [2024-07-10 14:08:57.203440] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:06:48.074 [2024-07-10 14:08:57.203573] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256752 ] 00:06:48.074 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.074 [2024-07-10 14:08:57.338626] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.332 [2024-07-10 14:08:57.593982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.332 Running 1000 pollers for 1 seconds with 1 microseconds period. 
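Note: the poller_perf flags traced above map directly onto the banner it just printed: -b 1000 registers 1000 pollers, -l is the poller period in microseconds (1 here, 0 in the second run further down) and -t 1 runs the measurement for one second; the counters in the summary that follows feed the poller_cost figure. The invocation, as traced:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1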
00:06:49.705 ====================================== 00:06:49.705 busy:2720252536 (cyc) 00:06:49.705 total_run_count: 282000 00:06:49.705 tsc_hz: 2700000000 (cyc) 00:06:49.705 ====================================== 00:06:49.705 poller_cost: 9646 (cyc), 3572 (nsec) 00:06:49.705 00:06:49.705 real 0m1.865s 00:06:49.705 user 0m1.696s 00:06:49.705 sys 0m0.160s 00:06:49.705 14:08:59 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.705 14:08:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:49.705 ************************************ 00:06:49.705 END TEST thread_poller_perf 00:06:49.705 ************************************ 00:06:49.705 14:08:59 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:49.705 14:08:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.705 14:08:59 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:49.705 14:08:59 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.705 14:08:59 thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.705 ************************************ 00:06:49.705 START TEST thread_poller_perf 00:06:49.705 ************************************ 00:06:49.705 14:08:59 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.705 [2024-07-10 14:08:59.116941] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:06:49.705 [2024-07-10 14:08:59.117082] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257033 ] 00:06:49.963 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.963 [2024-07-10 14:08:59.262839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.221 [2024-07-10 14:08:59.519262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.221 Running 1000 pollers for 1 seconds with 0 microseconds period. 
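Note: the poller_cost line in the summary above is consistent with busy cycles divided by total_run_count, converted to nanoseconds with the reported tsc_hz. A quick check with the numbers exactly as printed (plain shell arithmetic, nothing SPDK-specific):
  echo $(( 2720252536 / 282000 ))             # ~9646 cycles per batch of 1000 pollers
  awk 'BEGIN { print int(9646 / 2.7) }'       # ~3572 ns at the reported 2.7 GHz TSC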
00:06:51.596 ====================================== 00:06:51.596 busy:2705208362 (cyc) 00:06:51.596 total_run_count: 3698000 00:06:51.596 tsc_hz: 2700000000 (cyc) 00:06:51.596 ====================================== 00:06:51.596 poller_cost: 731 (cyc), 270 (nsec) 00:06:51.596 00:06:51.596 real 0m1.872s 00:06:51.596 user 0m1.692s 00:06:51.596 sys 0m0.170s 00:06:51.596 14:09:00 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.596 14:09:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:51.596 ************************************ 00:06:51.596 END TEST thread_poller_perf 00:06:51.596 ************************************ 00:06:51.596 14:09:00 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:51.596 14:09:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:51.596 00:06:51.596 real 0m3.886s 00:06:51.596 user 0m3.439s 00:06:51.596 sys 0m0.438s 00:06:51.596 14:09:00 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.596 14:09:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.597 ************************************ 00:06:51.597 END TEST thread 00:06:51.597 ************************************ 00:06:51.597 14:09:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:51.597 14:09:00 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:51.597 14:09:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.597 14:09:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.597 14:09:00 -- common/autotest_common.sh@10 -- # set +x 00:06:51.597 ************************************ 00:06:51.597 START TEST accel 00:06:51.597 ************************************ 00:06:51.597 14:09:01 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:51.597 * Looking for test storage... 00:06:51.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:51.855 14:09:01 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:51.855 14:09:01 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:51.855 14:09:01 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:51.855 14:09:01 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1257422 00:06:51.855 14:09:01 accel -- accel/accel.sh@63 -- # waitforlisten 1257422 00:06:51.855 14:09:01 accel -- common/autotest_common.sh@829 -- # '[' -z 1257422 ']' 00:06:51.855 14:09:01 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.855 14:09:01 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:51.855 14:09:01 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:51.855 14:09:01 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.855 14:09:01 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.855 14:09:01 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
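Note: with the thread tests done, accel.sh starts its own spdk_tgt with a JSON config fed in over /dev/fd/63 (assembled by build_accel_config, traced below) and, once /var/tmp/spdk.sock is listening, reads back which module owns each opcode. Queried by hand, the same information comes from the accel_get_opc_assignments RPC, e.g. (a sketch; the script uses its rpc_cmd wrapper and the jq filter shown in the trace):
  scripts/rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
With no hardware accel modules configured in this run, every entry comes back as software, which is what the expected_opcs loop below records.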
00:06:51.855 14:09:01 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.855 14:09:01 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.855 14:09:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.855 14:09:01 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.855 14:09:01 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.855 14:09:01 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.855 14:09:01 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:51.855 14:09:01 accel -- accel/accel.sh@41 -- # jq -r . 00:06:51.855 [2024-07-10 14:09:01.174551] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:06:51.855 [2024-07-10 14:09:01.174741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257422 ] 00:06:51.855 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.855 [2024-07-10 14:09:01.303178] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.113 [2024-07-10 14:09:01.550107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.046 14:09:02 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.046 14:09:02 accel -- common/autotest_common.sh@862 -- # return 0 00:06:53.046 14:09:02 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:53.046 14:09:02 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:53.046 14:09:02 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:53.046 14:09:02 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:53.046 14:09:02 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:53.046 14:09:02 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:53.046 14:09:02 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.046 14:09:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.046 14:09:02 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:53.046 14:09:02 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.046 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.046 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.046 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.046 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.046 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.046 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.046 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.046 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.046 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.046 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.046 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.046 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.046 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.046 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.046 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.046 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.046 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.046 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.046 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.046 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.046 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.046 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.046 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.046 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.047 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.047 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.047 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.047 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.047 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.047 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.047 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.047 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.047 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.047 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.047 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.047 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.047 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.047 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.047 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.047 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.047 14:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.047 14:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.047 14:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.047 14:09:02 accel -- accel/accel.sh@75 -- # killprocess 1257422 00:06:53.047 14:09:02 accel -- common/autotest_common.sh@948 -- # '[' -z 1257422 ']' 00:06:53.047 14:09:02 accel -- common/autotest_common.sh@952 -- # kill -0 1257422 00:06:53.047 14:09:02 accel -- common/autotest_common.sh@953 -- # uname 00:06:53.047 14:09:02 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.047 14:09:02 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1257422 00:06:53.047 14:09:02 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.047 14:09:02 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.047 14:09:02 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1257422' 00:06:53.047 killing process with pid 1257422 00:06:53.047 14:09:02 accel -- common/autotest_common.sh@967 -- # kill 1257422 00:06:53.047 14:09:02 accel -- common/autotest_common.sh@972 -- # wait 1257422 00:06:55.576 14:09:04 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:55.576 14:09:04 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:55.576 14:09:04 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:55.576 14:09:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.576 14:09:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.576 14:09:05 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:55.576 14:09:05 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:55.576 14:09:05 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:55.576 14:09:05 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.576 14:09:05 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.576 14:09:05 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.576 14:09:05 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.576 14:09:05 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.576 14:09:05 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:55.576 14:09:05 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:55.835 14:09:05 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.835 14:09:05 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:55.835 14:09:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.835 14:09:05 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:55.835 14:09:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:55.835 14:09:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.835 14:09:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.835 ************************************ 00:06:55.835 START TEST accel_missing_filename 00:06:55.835 ************************************ 00:06:55.835 14:09:05 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:55.835 14:09:05 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:55.835 14:09:05 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:55.835 14:09:05 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:55.835 14:09:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.835 14:09:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:55.835 14:09:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.835 14:09:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:55.835 14:09:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:55.835 14:09:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:55.835 14:09:05 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.835 14:09:05 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.835 14:09:05 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.835 14:09:05 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.835 14:09:05 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.835 14:09:05 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:55.835 14:09:05 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:55.835 [2024-07-10 14:09:05.150210] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:06:55.835 [2024-07-10 14:09:05.150347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257913 ] 00:06:55.835 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.835 [2024-07-10 14:09:05.295302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.094 [2024-07-10 14:09:05.551845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.353 [2024-07-10 14:09:05.783727] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.919 [2024-07-10 14:09:06.342061] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:57.486 A filename is required. 
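Note: this first negative test drives accel_perf with a compress workload but without the input file that workload needs, so it exits through the "A filename is required." path; the option in question is -l from the help text printed further down. Side by side (the second form is only illustrative, pointing at the bib test file the later compress_verify test uses, and is just meant to get past the same argument check; paths are relative to the spdk checkout):
  build/examples/accel_perf -t 1 -w compress                        # rejected: no -l input file
  build/examples/accel_perf -t 1 -w compress -l test/accel/bib      # supplies the input file, so it passes this check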
00:06:57.486 14:09:06 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:57.486 14:09:06 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.486 14:09:06 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:57.486 14:09:06 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:57.486 14:09:06 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:57.486 14:09:06 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.486 00:06:57.486 real 0m1.694s 00:06:57.486 user 0m1.477s 00:06:57.486 sys 0m0.242s 00:06:57.486 14:09:06 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.486 14:09:06 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:57.486 ************************************ 00:06:57.486 END TEST accel_missing_filename 00:06:57.486 ************************************ 00:06:57.486 14:09:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.486 14:09:06 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.486 14:09:06 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:57.486 14:09:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.486 14:09:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.486 ************************************ 00:06:57.486 START TEST accel_compress_verify 00:06:57.486 ************************************ 00:06:57.486 14:09:06 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.486 14:09:06 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:57.486 14:09:06 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.486 14:09:06 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:57.486 14:09:06 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.486 14:09:06 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:57.486 14:09:06 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.486 14:09:06 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.486 14:09:06 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.486 14:09:06 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:57.486 14:09:06 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.486 14:09:06 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.486 14:09:06 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.486 14:09:06 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.486 14:09:06 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.486 14:09:06 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:57.486 14:09:06 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:57.486 [2024-07-10 14:09:06.897573] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:06:57.486 [2024-07-10 14:09:06.897731] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258191 ] 00:06:57.744 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.744 [2024-07-10 14:09:07.043761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.002 [2024-07-10 14:09:07.309639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.260 [2024-07-10 14:09:07.543283] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.827 [2024-07-10 14:09:08.103086] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:59.085 00:06:59.085 Compression does not support the verify option, aborting. 00:06:59.085 14:09:08 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:59.085 14:09:08 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.085 14:09:08 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:59.085 14:09:08 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:59.085 14:09:08 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:59.085 14:09:08 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.085 00:06:59.085 real 0m1.710s 00:06:59.085 user 0m1.491s 00:06:59.085 sys 0m0.244s 00:06:59.085 14:09:08 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.085 14:09:08 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:59.085 ************************************ 00:06:59.085 END TEST accel_compress_verify 00:06:59.085 ************************************ 00:06:59.343 14:09:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.343 14:09:08 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:59.343 14:09:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:59.343 14:09:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.343 14:09:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.343 ************************************ 00:06:59.343 START TEST accel_wrong_workload 00:06:59.343 ************************************ 00:06:59.343 14:09:08 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:59.343 14:09:08 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:59.343 14:09:08 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:59.343 14:09:08 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:59.343 14:09:08 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.343 14:09:08 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:59.344 14:09:08 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.344 14:09:08 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:59.344 14:09:08 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:59.344 14:09:08 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:59.344 14:09:08 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.344 14:09:08 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.344 14:09:08 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.344 14:09:08 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.344 14:09:08 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.344 14:09:08 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:59.344 14:09:08 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:59.344 Unsupported workload type: foobar 00:06:59.344 [2024-07-10 14:09:08.646483] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:59.344 accel_perf options: 00:06:59.344 [-h help message] 00:06:59.344 [-q queue depth per core] 00:06:59.344 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:59.344 [-T number of threads per core 00:06:59.344 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:59.344 [-t time in seconds] 00:06:59.344 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:59.344 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:59.344 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:59.344 [-l for compress/decompress workloads, name of uncompressed input file 00:06:59.344 [-S for crc32c workload, use this seed value (default 0) 00:06:59.344 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:59.344 [-f for fill workload, use this BYTE value (default 255) 00:06:59.344 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:59.344 [-y verify result if this switch is on] 00:06:59.344 [-a tasks to allocate per core (default: same value as -q)] 00:06:59.344 Can be used to spread operations across a wider range of memory. 
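Note: the usage dump above is what accel_perf prints when -w names a workload it does not recognize (foobar here); the test only asserts that the exit status is non-zero. For contrast, the positive crc32c case run a little further down picks a workload from the list above:
  build/examples/accel_perf -t 1 -w crc32c -S 32 -y    # valid workload, seed 32, verify the result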
00:06:59.344 14:09:08 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:59.344 14:09:08 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.344 14:09:08 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.344 14:09:08 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.344 00:06:59.344 real 0m0.057s 00:06:59.344 user 0m0.069s 00:06:59.344 sys 0m0.026s 00:06:59.344 14:09:08 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.344 14:09:08 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:59.344 ************************************ 00:06:59.344 END TEST accel_wrong_workload 00:06:59.344 ************************************ 00:06:59.344 14:09:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.344 14:09:08 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:59.344 14:09:08 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:59.344 14:09:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.344 14:09:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.344 ************************************ 00:06:59.344 START TEST accel_negative_buffers 00:06:59.344 ************************************ 00:06:59.344 14:09:08 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:59.344 14:09:08 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:59.344 14:09:08 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:59.344 14:09:08 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:59.344 14:09:08 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.344 14:09:08 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:59.344 14:09:08 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.344 14:09:08 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:59.344 14:09:08 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:59.344 14:09:08 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:59.344 14:09:08 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.344 14:09:08 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.344 14:09:08 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.344 14:09:08 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.344 14:09:08 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.344 14:09:08 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:59.344 14:09:08 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:59.344 -x option must be non-negative. 
00:06:59.344 [2024-07-10 14:09:08.746667] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:59.344 accel_perf options: 00:06:59.344 [-h help message] 00:06:59.344 [-q queue depth per core] 00:06:59.344 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:59.344 [-T number of threads per core 00:06:59.344 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:59.344 [-t time in seconds] 00:06:59.344 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:59.344 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:59.344 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:59.344 [-l for compress/decompress workloads, name of uncompressed input file 00:06:59.344 [-S for crc32c workload, use this seed value (default 0) 00:06:59.344 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:59.344 [-f for fill workload, use this BYTE value (default 255) 00:06:59.344 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:59.344 [-y verify result if this switch is on] 00:06:59.344 [-a tasks to allocate per core (default: same value as -q)] 00:06:59.344 Can be used to spread operations across a wider range of memory. 00:06:59.344 14:09:08 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:59.344 14:09:08 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.344 14:09:08 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.344 14:09:08 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.344 00:06:59.344 real 0m0.056s 00:06:59.344 user 0m0.061s 00:06:59.344 sys 0m0.031s 00:06:59.344 14:09:08 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.344 14:09:08 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:59.344 ************************************ 00:06:59.344 END TEST accel_negative_buffers 00:06:59.344 ************************************ 00:06:59.344 14:09:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.344 14:09:08 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:59.344 14:09:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:59.344 14:09:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.344 14:09:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.344 ************************************ 00:06:59.344 START TEST accel_crc32c 00:06:59.344 ************************************ 00:06:59.344 14:09:08 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:59.344 14:09:08 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:59.344 14:09:08 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:59.344 14:09:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.344 14:09:08 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:59.344 14:09:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.344 14:09:08 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:59.344 14:09:08 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:59.344 14:09:08 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.344 14:09:08 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.344 14:09:08 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.344 14:09:08 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.344 14:09:08 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.344 14:09:08 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:59.344 14:09:08 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:59.602 [2024-07-10 14:09:08.852808] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:06:59.602 [2024-07-10 14:09:08.852944] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258870 ] 00:06:59.602 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.602 [2024-07-10 14:09:08.983262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.860 [2024-07-10 14:09:09.244631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.118 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.119 14:09:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.015 14:09:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 14:09:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.273 14:09:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:02.273 14:09:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.273 00:07:02.273 real 0m2.696s 00:07:02.273 user 0m2.455s 00:07:02.273 sys 0m0.238s 00:07:02.273 14:09:11 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.273 14:09:11 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:02.273 ************************************ 00:07:02.273 END TEST accel_crc32c 00:07:02.273 ************************************ 00:07:02.273 14:09:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.273 14:09:11 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:02.273 14:09:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:02.273 14:09:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.273 14:09:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.273 ************************************ 00:07:02.273 START TEST accel_crc32c_C2 00:07:02.273 ************************************ 00:07:02.273 14:09:11 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:02.273 14:09:11 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.273 14:09:11 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:02.273 14:09:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 14:09:11 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:02.273 14:09:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 14:09:11 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:02.273 14:09:11 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.273 14:09:11 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.273 14:09:11 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.273 14:09:11 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.273 14:09:11 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.273 14:09:11 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.273 14:09:11 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:02.273 14:09:11 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:02.273 [2024-07-10 14:09:11.592287] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:07:02.273 [2024-07-10 14:09:11.592421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259304 ] 00:07:02.273 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.273 [2024-07-10 14:09:11.722739] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.531 [2024-07-10 14:09:11.984989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:02.790 14:09:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.316 00:07:05.316 real 0m2.685s 00:07:05.316 user 0m0.009s 00:07:05.316 sys 0m0.004s 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.316 14:09:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:05.316 ************************************ 00:07:05.316 END TEST accel_crc32c_C2 00:07:05.316 ************************************ 00:07:05.316 14:09:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.316 14:09:14 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:05.316 14:09:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:05.316 14:09:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.316 14:09:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.316 ************************************ 00:07:05.316 START TEST accel_copy 00:07:05.316 ************************************ 00:07:05.316 14:09:14 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:05.316 14:09:14 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:05.316 14:09:14 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:07:05.316 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 14:09:14 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:05.316 14:09:14 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:05.316 14:09:14 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:05.316 14:09:14 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.316 14:09:14 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.316 14:09:14 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.316 14:09:14 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.316 14:09:14 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.316 14:09:14 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:05.316 14:09:14 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:05.316 [2024-07-10 14:09:14.328169] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:07:05.316 [2024-07-10 14:09:14.328308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259693 ] 00:07:05.316 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.316 [2024-07-10 14:09:14.459511] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.316 [2024-07-10 14:09:14.718898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.575 14:09:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.104 
14:09:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:08.104 14:09:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.104 00:07:08.104 real 0m2.688s 00:07:08.104 user 0m2.446s 00:07:08.104 sys 0m0.238s 00:07:08.104 14:09:16 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.104 14:09:16 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:08.104 ************************************ 00:07:08.104 END TEST accel_copy 00:07:08.104 ************************************ 00:07:08.104 14:09:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.104 14:09:16 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.104 14:09:16 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:08.104 14:09:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.104 14:09:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.104 ************************************ 00:07:08.104 START TEST accel_fill 00:07:08.104 ************************************ 00:07:08.104 14:09:17 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.104 14:09:17 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:08.104 14:09:17 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:08.104 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.104 14:09:17 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.104 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.104 14:09:17 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.104 14:09:17 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:08.104 14:09:17 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.104 14:09:17 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.104 14:09:17 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.104 14:09:17 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.104 14:09:17 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.104 14:09:17 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:08.104 14:09:17 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:08.104 [2024-07-10 14:09:17.062645] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:07:08.104 [2024-07-10 14:09:17.062794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260010 ] 00:07:08.104 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.104 [2024-07-10 14:09:17.191038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.104 [2024-07-10 14:09:17.451122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
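For reference, the accel_fill case traced here is driven by the accel_perf command line logged at accel.sh@12 just above. A direct run outside the harness would look roughly like the sketch below; the harness additionally passes a generated JSON config on -c /dev/fd/62, and the flag meanings given in the comments are inferred from the traced values rather than taken from documentation.

    # Hypothetical manual run of the accel_perf example exercised by this test.
    # Inferred flag meanings (assumptions):
    #   -t 1    run for 1 second          ('1 seconds' in the trace)
    #   -w fill fill workload             (accel_opc=fill)
    #   -f 128  fill byte 0x80            (val=0x80 in the trace)
    #   -q 64   queue depth, -a 64 allocate depth (val=64, 64)
    #   -y      verify the result         (val=Yes)
    ./spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y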
00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.363 14:09:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.263 14:09:19 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:10.263 14:09:19 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.263 00:07:10.263 real 0m2.689s 00:07:10.263 user 0m2.442s 00:07:10.263 sys 0m0.244s 00:07:10.263 14:09:19 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.263 14:09:19 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:10.263 ************************************ 00:07:10.263 END TEST accel_fill 00:07:10.263 ************************************ 00:07:10.263 14:09:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.263 14:09:19 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:10.263 14:09:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:10.263 14:09:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.263 14:09:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.521 ************************************ 00:07:10.521 START TEST accel_copy_crc32c 00:07:10.521 ************************************ 00:07:10.521 14:09:19 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:10.521 14:09:19 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:10.521 14:09:19 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:10.521 14:09:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.521 14:09:19 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:10.521 14:09:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.521 14:09:19 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:10.521 14:09:19 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:10.521 14:09:19 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.521 14:09:19 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.521 14:09:19 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.521 14:09:19 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.521 14:09:19 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.522 14:09:19 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:10.522 14:09:19 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:10.522 [2024-07-10 14:09:19.803665] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:07:10.522 [2024-07-10 14:09:19.803800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260300 ] 00:07:10.522 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.522 [2024-07-10 14:09:19.932394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.780 [2024-07-10 14:09:20.201712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.038 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.039 
14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.039 14:09:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.019 00:07:13.019 real 0m2.707s 00:07:13.019 user 0m0.012s 00:07:13.019 sys 0m0.002s 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.019 14:09:22 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:13.019 ************************************ 00:07:13.020 END TEST accel_copy_crc32c 00:07:13.020 ************************************ 00:07:13.020 14:09:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.020 14:09:22 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:13.020 14:09:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:13.020 14:09:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.020 14:09:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.278 ************************************ 00:07:13.278 START TEST accel_copy_crc32c_C2 00:07:13.278 ************************************ 00:07:13.278 14:09:22 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:13.278 14:09:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.278 14:09:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:13.278 14:09:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.278 14:09:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:13.278 14:09:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.278 14:09:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:13.278 14:09:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.278 14:09:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.278 14:09:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.278 14:09:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.278 14:09:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.278 14:09:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.278 14:09:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:13.278 14:09:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:13.278 [2024-07-10 14:09:22.561017] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:07:13.278 [2024-07-10 14:09:22.561143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260711 ] 00:07:13.278 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.278 [2024-07-10 14:09:22.689981] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.536 [2024-07-10 14:09:22.952541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.794 14:09:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.325 00:07:16.325 real 0m2.705s 00:07:16.325 user 0m0.014s 00:07:16.325 sys 0m0.002s 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.325 14:09:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:16.325 ************************************ 00:07:16.325 END TEST accel_copy_crc32c_C2 00:07:16.325 ************************************ 00:07:16.325 14:09:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.325 14:09:25 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:16.325 14:09:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:16.325 14:09:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.325 14:09:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.325 ************************************ 00:07:16.325 START TEST accel_dualcast 00:07:16.325 ************************************ 00:07:16.325 14:09:25 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:16.325 14:09:25 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:16.325 14:09:25 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:16.325 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.325 14:09:25 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:16.325 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.325 14:09:25 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:16.325 14:09:25 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:16.325 14:09:25 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.325 14:09:25 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.325 14:09:25 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.325 14:09:25 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.325 14:09:25 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.325 14:09:25 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:16.325 14:09:25 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:16.325 [2024-07-10 14:09:25.314649] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:07:16.325 [2024-07-10 14:09:25.314798] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261003 ] 00:07:16.325 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.325 [2024-07-10 14:09:25.442579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.325 [2024-07-10 14:09:25.700885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.584 14:09:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.485 14:09:27 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:18.485 14:09:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.485 00:07:18.485 real 0m2.686s 00:07:18.485 user 0m2.450s 00:07:18.485 sys 0m0.233s 00:07:18.485 14:09:27 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.485 14:09:27 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:18.485 ************************************ 00:07:18.485 END TEST accel_dualcast 00:07:18.485 ************************************ 00:07:18.743 14:09:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.743 14:09:27 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:18.743 14:09:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:18.743 14:09:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.743 14:09:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.744 ************************************ 00:07:18.744 START TEST accel_compare 00:07:18.744 ************************************ 00:07:18.744 14:09:28 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:18.744 14:09:28 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:18.744 14:09:28 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:18.744 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:18.744 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:18.744 14:09:28 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:18.744 14:09:28 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:18.744 14:09:28 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:18.744 14:09:28 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.744 14:09:28 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.744 14:09:28 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.744 14:09:28 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.744 14:09:28 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.744 14:09:28 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:18.744 14:09:28 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:18.744 [2024-07-10 14:09:28.043418] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:07:18.744 [2024-07-10 14:09:28.043589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261412 ] 00:07:18.744 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.744 [2024-07-10 14:09:28.186181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.002 [2024-07-10 14:09:28.447996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.261 14:09:28 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.261 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.262 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.262 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.262 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.262 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.262 14:09:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.262 14:09:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.262 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.262 14:09:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.794 
14:09:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:21.794 14:09:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.794 00:07:21.794 real 0m2.717s 00:07:21.794 user 0m0.009s 00:07:21.794 sys 0m0.003s 00:07:21.794 14:09:30 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.794 14:09:30 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:21.794 ************************************ 00:07:21.794 END TEST accel_compare 00:07:21.794 ************************************ 00:07:21.794 14:09:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.794 14:09:30 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:21.794 14:09:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:21.794 14:09:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.794 14:09:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.794 ************************************ 00:07:21.794 START TEST accel_xor 00:07:21.794 ************************************ 00:07:21.794 14:09:30 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:21.794 14:09:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:21.794 14:09:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:21.794 14:09:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.794 14:09:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.794 14:09:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:21.794 14:09:30 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:21.794 14:09:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:21.794 14:09:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.794 14:09:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.794 14:09:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.794 14:09:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.794 14:09:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.794 14:09:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:21.794 14:09:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:21.794 [2024-07-10 14:09:30.806070] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:07:21.794 [2024-07-10 14:09:30.806214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261704 ] 00:07:21.794 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.794 [2024-07-10 14:09:30.949234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.794 [2024-07-10 14:09:31.210866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.053 14:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:23.991 14:09:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.991 00:07:23.991 real 0m2.705s 00:07:23.991 user 0m0.011s 00:07:23.991 sys 0m0.002s 00:07:23.991 14:09:33 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.991 14:09:33 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:23.991 ************************************ 00:07:23.991 END TEST accel_xor 00:07:23.991 ************************************ 00:07:24.248 14:09:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.248 14:09:33 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:24.248 14:09:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:24.248 14:09:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.248 14:09:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.248 ************************************ 00:07:24.248 START TEST accel_xor 00:07:24.248 ************************************ 00:07:24.248 14:09:33 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:24.248 14:09:33 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:24.248 14:09:33 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:24.248 14:09:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.248 14:09:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.248 14:09:33 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:24.248 14:09:33 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:24.248 14:09:33 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:24.248 14:09:33 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.248 14:09:33 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.248 14:09:33 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.248 14:09:33 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.248 14:09:33 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.248 14:09:33 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:24.248 14:09:33 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:24.248 [2024-07-10 14:09:33.553401] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:07:24.248 [2024-07-10 14:09:33.553650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261999 ] 00:07:24.248 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.248 [2024-07-10 14:09:33.686924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.505 [2024-07-10 14:09:33.948541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.762 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.763 14:09:34 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.763 14:09:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:27.288 14:09:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.288 00:07:27.288 real 0m2.692s 00:07:27.288 user 0m0.012s 00:07:27.288 sys 0m0.001s 00:07:27.288 14:09:36 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.288 14:09:36 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:27.288 ************************************ 00:07:27.288 END TEST accel_xor 00:07:27.288 ************************************ 00:07:27.288 14:09:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.288 14:09:36 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:27.288 14:09:36 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:27.288 14:09:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.288 14:09:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.288 ************************************ 00:07:27.288 START TEST accel_dif_verify 00:07:27.288 ************************************ 00:07:27.288 14:09:36 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:27.288 14:09:36 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:27.288 14:09:36 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:27.288 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.288 14:09:36 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:27.288 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.288 14:09:36 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:27.288 14:09:36 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:27.288 14:09:36 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.288 14:09:36 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.288 14:09:36 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.288 14:09:36 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.288 14:09:36 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.288 14:09:36 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:27.288 14:09:36 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:27.288 [2024-07-10 14:09:36.292634] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:07:27.288 [2024-07-10 14:09:36.292770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262406 ] 00:07:27.288 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.288 [2024-07-10 14:09:36.436489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.288 [2024-07-10 14:09:36.696355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.546 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.547 14:09:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.075 14:09:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:30.076 14:09:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.076 00:07:30.076 real 0m2.699s 00:07:30.076 user 0m0.010s 00:07:30.076 sys 0m0.003s 00:07:30.076 14:09:38 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.076 14:09:38 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:30.076 ************************************ 00:07:30.076 END TEST accel_dif_verify 00:07:30.076 ************************************ 00:07:30.076 14:09:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.076 14:09:38 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:30.076 14:09:38 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:30.076 14:09:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.076 14:09:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.076 ************************************ 00:07:30.076 START TEST accel_dif_generate 00:07:30.076 ************************************ 00:07:30.076 14:09:38 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:30.076 14:09:38 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:30.076 14:09:38 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:30.076 14:09:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.076 
14:09:38 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:30.076 14:09:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.076 14:09:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:30.076 14:09:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:30.076 14:09:38 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.076 14:09:38 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.076 14:09:38 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.076 14:09:38 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.076 14:09:38 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.076 14:09:38 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:30.076 14:09:38 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:30.076 [2024-07-10 14:09:39.040053] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:07:30.076 [2024-07-10 14:09:39.040186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262696 ] 00:07:30.076 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.076 [2024-07-10 14:09:39.169604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.076 [2024-07-10 14:09:39.430996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:30.334 14:09:39 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.334 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.335 14:09:39 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.335 14:09:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.235 14:09:41 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:32.235 14:09:41 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.235 00:07:32.235 real 0m2.697s 00:07:32.235 user 0m0.011s 00:07:32.235 sys 0m0.003s 00:07:32.235 14:09:41 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.235 14:09:41 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:32.235 ************************************ 00:07:32.235 END TEST accel_dif_generate 00:07:32.235 ************************************ 00:07:32.235 14:09:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.493 14:09:41 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:32.493 14:09:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:32.493 14:09:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.493 14:09:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.493 ************************************ 00:07:32.493 START TEST accel_dif_generate_copy 00:07:32.493 ************************************ 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:32.493 14:09:41 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:32.493 [2024-07-10 14:09:41.783460] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
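A note on the dense xtrace above: every repeated accel.sh@19-@23 entry (IFS=:, read -r var val, case "$var" in, accel_opc=..., accel_module=software) is one pass of a small key/value parsing loop in accel.sh. The following is only a reconstruction from those traced line numbers, not the real script, and the key names are made up for illustration:

    # Sketch (assumption): the loop behind the repeated @19-@23 trace entries.
    parse_job_summary() {
      local var val accel_opc accel_module
      while IFS=: read -r var val; do
        case "$var" in
          *opcode*) accel_opc=${val# } ;;     # e.g. dif_generate_copy
          *module*) accel_module=${val# } ;;  # e.g. software
        esac
      done
      echo "opc=$accel_opc module=$accel_module"
    }
    # Fed canned text here; in accel.sh the input is the accel_perf run shown above.
    printf 'opcode: dif_generate_copy\nmodule: software\n' | parse_job_summary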
00:07:32.493 [2024-07-10 14:09:41.783608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263109 ] 00:07:32.493 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.493 [2024-07-10 14:09:41.928347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.751 [2024-07-10 14:09:42.190453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.009 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.010 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.010 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.010 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.010 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.010 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.010 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.010 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.010 14:09:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.538 00:07:35.538 real 0m2.711s 00:07:35.538 user 0m2.453s 00:07:35.538 sys 0m0.255s 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.538 14:09:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:35.538 ************************************ 00:07:35.538 END TEST accel_dif_generate_copy 00:07:35.538 ************************************ 00:07:35.538 14:09:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.539 14:09:44 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:35.539 14:09:44 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.539 14:09:44 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:35.539 14:09:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.539 14:09:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.539 ************************************ 00:07:35.539 START TEST accel_comp 00:07:35.539 ************************************ 00:07:35.539 14:09:44 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.539 14:09:44 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:35.539 14:09:44 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:35.539 14:09:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.539 14:09:44 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.539 14:09:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.539 14:09:44 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.539 14:09:44 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:35.539 14:09:44 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.539 14:09:44 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.539 14:09:44 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.539 14:09:44 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.539 14:09:44 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.539 14:09:44 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:35.539 14:09:44 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:35.539 [2024-07-10 14:09:44.539497] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:07:35.539 [2024-07-10 14:09:44.539649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263412 ] 00:07:35.539 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.539 [2024-07-10 14:09:44.684049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.539 [2024-07-10 14:09:44.944753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.798 14:09:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:38.328 14:09:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.328 00:07:38.328 real 0m2.708s 00:07:38.328 user 0m0.011s 00:07:38.328 sys 0m0.002s 00:07:38.328 14:09:47 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.328 14:09:47 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:38.328 ************************************ 00:07:38.328 END TEST accel_comp 00:07:38.328 ************************************ 00:07:38.328 14:09:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.328 14:09:47 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.328 14:09:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:38.328 14:09:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.328 14:09:47 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:38.328 ************************************ 00:07:38.328 START TEST accel_decomp 00:07:38.328 ************************************ 00:07:38.328 14:09:47 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.328 14:09:47 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:38.328 14:09:47 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:38.328 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.328 14:09:47 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.328 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.328 14:09:47 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.328 14:09:47 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:38.328 14:09:47 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.328 14:09:47 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.328 14:09:47 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.328 14:09:47 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.328 14:09:47 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.328 14:09:47 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:38.328 14:09:47 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:38.328 [2024-07-10 14:09:47.290481] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
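The accel_decomp case traced here runs the same accel_perf binary against a pre-compressed input file. Roughly how to reproduce it outside the harness, with the flags and paths copied verbatim from the command above; dropping the -c /dev/fd/62 config that accel.sh normally pipes in is an assumption on my part:

    # Hedged reproduction sketch; workload, input file and -y flag taken from the trace above.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y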
00:07:38.328 [2024-07-10 14:09:47.290629] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263814 ] 00:07:38.328 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.328 [2024-07-10 14:09:47.434367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.328 [2024-07-10 14:09:47.693356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.584 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.585 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.585 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.585 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.585 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.585 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.585 14:09:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:38.585 14:09:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.585 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.585 14:09:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.482 14:09:49 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:40.482 14:09:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.482 00:07:40.482 real 0m2.708s 00:07:40.482 user 0m2.454s 00:07:40.482 sys 0m0.253s 00:07:40.482 14:09:49 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.482 14:09:49 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:40.482 ************************************ 00:07:40.482 END TEST accel_decomp 00:07:40.482 ************************************ 00:07:40.740 14:09:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.740 14:09:49 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:40.740 14:09:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:40.740 14:09:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.740 14:09:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.740 ************************************ 00:07:40.740 START TEST accel_decomp_full 00:07:40.740 ************************************ 00:07:40.740 14:09:49 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:40.740 14:09:49 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:40.740 14:09:50 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:40.740 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:40.740 14:09:50 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:40.740 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:40.740 14:09:50 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:40.740 14:09:50 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:40.740 14:09:50 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.740 14:09:50 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.740 14:09:50 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.740 14:09:50 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.740 14:09:50 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.740 14:09:50 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:40.740 14:09:50 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:40.740 [2024-07-10 14:09:50.044993] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:07:40.740 [2024-07-10 14:09:50.045118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264109 ] 00:07:40.740 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.740 [2024-07-10 14:09:50.174994] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.997 [2024-07-10 14:09:50.435263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.255 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.256 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:41.256 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:41.256 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.256 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.256 14:09:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:41.256 14:09:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:41.256 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.256 14:09:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:43.779 14:09:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.779 00:07:43.779 real 0m2.712s 00:07:43.779 user 0m2.487s 00:07:43.779 sys 0m0.223s 00:07:43.779 14:09:52 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.779 14:09:52 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:43.779 ************************************ 00:07:43.779 END TEST accel_decomp_full 00:07:43.779 ************************************ 00:07:43.779 14:09:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.779 14:09:52 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.779 14:09:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:43.779 14:09:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.779 14:09:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.779 ************************************ 00:07:43.779 START TEST accel_decomp_mcore 00:07:43.779 ************************************ 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:43.779 14:09:52 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:43.779 [2024-07-10 14:09:52.799759] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
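The build_accel_config entries in this preamble (accel_json_cfg=(), the [[ 0 -gt 0 ]] checks, [[ -n '' ]], local IFS=, and jq -r .) show an empty accel module configuration being assembled and pretty-printed for accel_perf to read on /dev/fd/62. A rough sketch of that helper, reconstructed from the trace alone; the real accel.sh and its JSON layout may differ:

    # Sketch (assumption): roughly what the traced build_accel_config does.
    build_accel_config() {
      local accel_json_cfg=()   # @31: per-module JSON snippets; none were queued here,
                                # so the @32-@36 checks ([[ 0 -gt 0 ]], [[ -n '' ]]) all skip
      local IFS=,               # @40: would join multiple snippets with commas
      echo "{\"accel_cfg\": [${accel_json_cfg[*]}]}"   # hypothetical layout, not accel.sh's real one
    }
    build_accel_config | jq -r .   # @41: pretty-print what gets handed to accel_perf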
00:07:43.779 [2024-07-10 14:09:52.799876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264405 ] 00:07:43.779 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.779 [2024-07-10 14:09:52.929905] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.779 [2024-07-10 14:09:53.196894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.779 [2024-07-10 14:09:53.196948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.779 [2024-07-10 14:09:53.196990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.779 [2024-07-10 14:09:53.197002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:44.038 14:09:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.568 00:07:46.568 real 0m2.729s 00:07:46.568 user 0m0.010s 00:07:46.568 sys 0m0.005s 00:07:46.568 14:09:55 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.568 14:09:55 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:46.568 ************************************ 00:07:46.568 END TEST accel_decomp_mcore 00:07:46.568 ************************************ 00:07:46.568 14:09:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:46.568 14:09:55 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.568 14:09:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:46.568 14:09:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.568 14:09:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.568 ************************************ 00:07:46.568 START TEST accel_decomp_full_mcore 00:07:46.568 ************************************ 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:46.568 14:09:55 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:46.568 [2024-07-10 14:09:55.577011] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
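The run_test wrapper above drives the SPDK accel_perf example binary. A minimal sketch (not part of the captured trace) of how the same multi-core decompress case could be reproduced by hand follows, reading the flags the way the harness appears to use them: -t run time in seconds, -w workload, -l the file to decompress, -y verify, -o transfer size (0 appears to mean the whole input file, since the trace reports '111250 bytes' for -o 0 against the 4096-byte default), -m core mask. The harness also feeds a generated accel JSON config on /dev/fd/62 via -c; for a manual software-only run it should be possible to leave that out or point it at a config file.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# same workload as the accel_decomp_full_mcore case above: 1 second of
# software decompress over test/accel/bib, verified, on a 4-core mask
$SPDK/build/examples/accel_perf \
  -t 1 -w decompress \
  -l $SPDK/test/accel/bib \
  -y -o 0 -m 0xf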
00:07:46.568 [2024-07-10 14:09:55.577156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264819 ] 00:07:46.568 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.568 [2024-07-10 14:09:55.722130] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.568 [2024-07-10 14:09:55.990696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.568 [2024-07-10 14:09:55.990750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.568 [2024-07-10 14:09:55.990796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.568 [2024-07-10 14:09:55.990808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:46.827 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.828 14:09:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.412 00:07:49.412 real 0m2.785s 00:07:49.412 user 0m0.015s 00:07:49.412 sys 0m0.001s 00:07:49.412 14:09:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.413 14:09:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:49.413 ************************************ 00:07:49.413 END TEST accel_decomp_full_mcore 00:07:49.413 ************************************ 00:07:49.413 14:09:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.413 14:09:58 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.413 14:09:58 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:49.413 14:09:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.413 14:09:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.413 ************************************ 00:07:49.413 START TEST accel_decomp_mthread 00:07:49.413 ************************************ 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:49.413 14:09:58 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:49.413 [2024-07-10 14:09:58.408581] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
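Most of the trace volume above is accel.sh (the @19-@23 markers) stepping through a read loop: accel_perf's configuration summary is read line by line, split on ':', and the module and opcode values are captured for the checks that follow the timed run. A hedged sketch of that pattern; the matched key names and trimming are illustrative assumptions, since the trace only shows the captured values.

while IFS=: read -r var val; do
  case "$var" in
    *odule*)   accel_module=$val ;;   # captured as "software" in the runs above
    *orkload*) accel_opc=$val ;;      # captured as "decompress"
  esac
done < <("$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y)
# leading whitespace in $val may need trimming in a real script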
00:07:49.413 [2024-07-10 14:09:58.408724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265111 ] 00:07:49.413 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.413 [2024-07-10 14:09:58.539388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.413 [2024-07-10 14:09:58.803966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.672 14:09:59 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.672 14:09:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.572 14:10:01 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.572 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.830 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.830 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:51.830 14:10:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.830 00:07:51.830 real 0m2.690s 00:07:51.830 user 0m2.442s 00:07:51.830 sys 0m0.246s 00:07:51.830 14:10:01 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.830 14:10:01 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:51.830 ************************************ 00:07:51.830 END TEST accel_decomp_mthread 00:07:51.830 ************************************ 00:07:51.830 14:10:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.830 14:10:01 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:51.830 14:10:01 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:51.830 14:10:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.830 14:10:01 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.830 ************************************ 00:07:51.830 START TEST accel_decomp_full_mthread 00:07:51.830 ************************************ 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:51.830 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:51.830 [2024-07-10 14:10:01.146366] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
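After each timed run the trace shows the same three checks at accel.sh@27 ([[ -n software ]], [[ -n decompress ]], [[ software == software ]]). In standalone form, and assuming the variable names from the loop sketched earlier, they amount to:

[[ -n "$accel_module" ]]            # some module was reported
[[ -n "$accel_opc" ]]               # the workload opcode was reported
[[ "$accel_module" == software ]]   # and it was the software path that ran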
00:07:51.830 [2024-07-10 14:10:01.146523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265520 ] 00:07:51.830 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.830 [2024-07-10 14:10:01.290595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.088 [2024-07-10 14:10:01.556133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 14:10:01 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 14:10:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.871 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:54.871 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.871 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.871 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.871 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:54.871 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.871 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.871 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.871 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:54.871 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.871 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.871 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.871 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:54.871 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.872 00:07:54.872 real 0m2.762s 00:07:54.872 user 0m2.501s 00:07:54.872 sys 0m0.260s 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.872 14:10:03 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:54.872 ************************************ 00:07:54.872 END 
TEST accel_decomp_full_mthread 00:07:54.872 ************************************ 00:07:54.872 14:10:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:54.872 14:10:03 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:54.872 14:10:03 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:54.872 14:10:03 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:54.872 14:10:03 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:54.872 14:10:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.872 14:10:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.872 14:10:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.872 14:10:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.872 14:10:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.872 14:10:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.872 14:10:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.872 14:10:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:54.872 14:10:03 accel -- accel/accel.sh@41 -- # jq -r . 00:07:54.872 ************************************ 00:07:54.872 START TEST accel_dif_functional_tests 00:07:54.872 ************************************ 00:07:54.872 14:10:03 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:54.872 [2024-07-10 14:10:03.991466] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:07:54.872 [2024-07-10 14:10:03.991616] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265819 ] 00:07:54.872 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.872 [2024-07-10 14:10:04.121876] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.130 [2024-07-10 14:10:04.388448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.130 [2024-07-10 14:10:04.388481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.130 [2024-07-10 14:10:04.388491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.388 00:07:55.388 00:07:55.388 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.388 http://cunit.sourceforge.net/ 00:07:55.388 00:07:55.388 00:07:55.388 Suite: accel_dif 00:07:55.388 Test: verify: DIF generated, GUARD check ...passed 00:07:55.388 Test: verify: DIF generated, APPTAG check ...passed 00:07:55.388 Test: verify: DIF generated, REFTAG check ...passed 00:07:55.388 Test: verify: DIF not generated, GUARD check ...[2024-07-10 14:10:04.749503] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:55.388 passed 00:07:55.388 Test: verify: DIF not generated, APPTAG check ...[2024-07-10 14:10:04.749624] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:55.388 passed 00:07:55.388 Test: verify: DIF not generated, REFTAG check ...[2024-07-10 14:10:04.749693] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:55.388 passed 00:07:55.388 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:55.388 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-10 
14:10:04.749826] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:55.388 passed 00:07:55.388 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:55.388 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:55.388 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:55.388 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-10 14:10:04.750096] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:55.388 passed 00:07:55.388 Test: verify copy: DIF generated, GUARD check ...passed 00:07:55.388 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:55.388 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:55.388 Test: verify copy: DIF not generated, GUARD check ...[2024-07-10 14:10:04.750402] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:55.388 passed 00:07:55.388 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-10 14:10:04.750511] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:55.388 passed 00:07:55.388 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-10 14:10:04.750599] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:55.388 passed 00:07:55.388 Test: generate copy: DIF generated, GUARD check ...passed 00:07:55.388 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:55.388 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:55.388 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:55.388 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:55.388 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:55.388 Test: generate copy: iovecs-len validate ...[2024-07-10 14:10:04.751103] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:55.388 passed 00:07:55.388 Test: generate copy: buffer alignment validate ...passed 00:07:55.388 00:07:55.388 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.388 suites 1 1 n/a 0 0 00:07:55.388 tests 26 26 26 0 0 00:07:55.388 asserts 115 115 115 0 n/a 00:07:55.388 00:07:55.388 Elapsed time = 0.005 seconds 00:07:56.762 00:07:56.762 real 0m2.182s 00:07:56.762 user 0m4.269s 00:07:56.762 sys 0m0.333s 00:07:56.762 14:10:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.762 14:10:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:56.762 ************************************ 00:07:56.762 END TEST accel_dif_functional_tests 00:07:56.762 ************************************ 00:07:56.762 14:10:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:56.762 00:07:56.762 real 1m5.099s 00:07:56.762 user 1m11.952s 00:07:56.762 sys 0m7.328s 00:07:56.762 14:10:06 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.762 14:10:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.762 ************************************ 00:07:56.762 END TEST accel 00:07:56.762 ************************************ 00:07:56.762 14:10:06 -- common/autotest_common.sh@1142 -- # return 0 00:07:56.762 14:10:06 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:56.762 14:10:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.762 14:10:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.762 14:10:06 -- common/autotest_common.sh@10 -- # set +x 00:07:56.762 ************************************ 00:07:56.762 START TEST accel_rpc 00:07:56.762 ************************************ 00:07:56.762 14:10:06 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:56.762 * Looking for test storage... 00:07:56.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:56.762 14:10:06 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:56.762 14:10:06 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1266145 00:07:56.762 14:10:06 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:56.762 14:10:06 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1266145 00:07:56.762 14:10:06 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1266145 ']' 00:07:56.762 14:10:06 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.762 14:10:06 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.762 14:10:06 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.762 14:10:06 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.762 14:10:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.021 [2024-07-10 14:10:06.313630] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
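The accel_rpc test that starts above launches spdk_tgt with --wait-for-rpc so that accel opcode assignments can be made before the framework initializes. A minimal sketch of that launch-and-wait pattern; the harness uses its waitforlisten helper, while a plain polling loop is shown here as an assumption for a manual run.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/spdk_tgt --wait-for-rpc &
tgt_pid=$!
# spdk_get_version is answered even before framework_start_init, so poll it
until $SPDK/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.1; done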
00:07:57.021 [2024-07-10 14:10:06.313806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266145 ] 00:07:57.021 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.021 [2024-07-10 14:10:06.434389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.278 [2024-07-10 14:10:06.694121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.845 14:10:07 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.845 14:10:07 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:57.845 14:10:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:57.845 14:10:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:57.845 14:10:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:57.845 14:10:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:57.845 14:10:07 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:57.845 14:10:07 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:57.845 14:10:07 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.845 14:10:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.845 ************************************ 00:07:57.845 START TEST accel_assign_opcode 00:07:57.845 ************************************ 00:07:57.845 14:10:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:57.845 14:10:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:57.845 14:10:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.845 14:10:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:57.845 [2024-07-10 14:10:07.296622] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:57.845 14:10:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.845 14:10:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:57.845 14:10:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.845 14:10:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:57.845 [2024-07-10 14:10:07.304618] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:57.845 14:10:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.845 14:10:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:57.845 14:10:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.845 14:10:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.779 14:10:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.780 14:10:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:58.780 14:10:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.780 14:10:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
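The assign/init/query sequence traced above, together with the jq and grep that follow below, corresponds to three RPCs; they are shown here as direct scripts/rpc.py calls, which is the same script the rpc_cmd helper in the trace goes through.

$SPDK/scripts/rpc.py accel_assign_opc -o copy -m software     # route the copy opcode to the software module
$SPDK/scripts/rpc.py framework_start_init                     # let the paused target finish initializing
$SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints "software", as the trace below confirms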
00:07:58.780 14:10:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:58.780 14:10:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:58.780 14:10:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.780 software 00:07:58.780 00:07:58.780 real 0m0.933s 00:07:58.780 user 0m0.040s 00:07:58.780 sys 0m0.009s 00:07:58.780 14:10:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.780 14:10:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.780 ************************************ 00:07:58.780 END TEST accel_assign_opcode 00:07:58.780 ************************************ 00:07:58.780 14:10:08 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:58.780 14:10:08 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1266145 00:07:58.780 14:10:08 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1266145 ']' 00:07:58.780 14:10:08 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1266145 00:07:58.780 14:10:08 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:58.780 14:10:08 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.780 14:10:08 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1266145 00:07:59.038 14:10:08 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:59.038 14:10:08 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:59.038 14:10:08 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1266145' 00:07:59.038 killing process with pid 1266145 00:07:59.038 14:10:08 accel_rpc -- common/autotest_common.sh@967 -- # kill 1266145 00:07:59.038 14:10:08 accel_rpc -- common/autotest_common.sh@972 -- # wait 1266145 00:08:01.567 00:08:01.567 real 0m4.668s 00:08:01.567 user 0m4.645s 00:08:01.567 sys 0m0.658s 00:08:01.567 14:10:10 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.567 14:10:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.567 ************************************ 00:08:01.567 END TEST accel_rpc 00:08:01.567 ************************************ 00:08:01.567 14:10:10 -- common/autotest_common.sh@1142 -- # return 0 00:08:01.567 14:10:10 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:01.567 14:10:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:01.567 14:10:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.567 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:01.567 ************************************ 00:08:01.567 START TEST app_cmdline 00:08:01.567 ************************************ 00:08:01.567 14:10:10 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:01.567 * Looking for test storage... 
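The killprocess teardown at the end of the accel_rpc run above boils down to the steps below: confirm the pid is still alive, check its command name (the sudo comparison guards the case where the target was started under sudo), then kill and reap it. A hedged sketch using the same commands the trace shows.

kill -0 "$tgt_pid"                            # still alive?
name=$(ps --no-headers -o comm= "$tgt_pid")   # "reactor_0" in the trace above
if [ "$name" != sudo ]; then
  echo "killing process with pid $tgt_pid"
  kill "$tgt_pid"
fi
wait "$tgt_pid"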
00:08:01.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:01.567 14:10:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:01.567 14:10:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1266758 00:08:01.567 14:10:10 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:01.567 14:10:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1266758 00:08:01.567 14:10:10 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1266758 ']' 00:08:01.567 14:10:10 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.567 14:10:10 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:01.567 14:10:10 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.567 14:10:10 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:01.567 14:10:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:01.567 [2024-07-10 14:10:11.040966] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:08:01.567 [2024-07-10 14:10:11.041118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266758 ] 00:08:01.825 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.825 [2024-07-10 14:10:11.163948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.082 [2024-07-10 14:10:11.416094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.015 14:10:12 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:03.015 14:10:12 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:03.015 14:10:12 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:03.272 { 00:08:03.272 "version": "SPDK v24.09-pre git sha1 968224f46", 00:08:03.272 "fields": { 00:08:03.273 "major": 24, 00:08:03.273 "minor": 9, 00:08:03.273 "patch": 0, 00:08:03.273 "suffix": "-pre", 00:08:03.273 "commit": "968224f46" 00:08:03.273 } 00:08:03.273 } 00:08:03.273 14:10:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:03.273 14:10:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:03.273 14:10:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:03.273 14:10:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:03.273 14:10:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:03.273 14:10:12 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 14:10:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 14:10:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:03.273 14:10:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:03.273 14:10:12 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 14:10:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:03.273 14:10:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:03.273 14:10:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:03.273 14:10:12 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:03.273 14:10:12 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:03.273 14:10:12 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:03.273 14:10:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.273 14:10:12 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:03.273 14:10:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.273 14:10:12 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:03.273 14:10:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.273 14:10:12 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:03.273 14:10:12 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:03.273 14:10:12 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:03.531 request: 00:08:03.531 { 00:08:03.531 "method": "env_dpdk_get_mem_stats", 00:08:03.531 "req_id": 1 00:08:03.531 } 00:08:03.531 Got JSON-RPC error response 00:08:03.531 response: 00:08:03.531 { 00:08:03.531 "code": -32601, 00:08:03.531 "message": "Method not found" 00:08:03.531 } 00:08:03.531 14:10:12 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:03.531 14:10:12 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:03.531 14:10:12 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:03.531 14:10:12 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:03.531 14:10:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1266758 00:08:03.531 14:10:12 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1266758 ']' 00:08:03.531 14:10:12 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1266758 00:08:03.531 14:10:12 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:03.531 14:10:12 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:03.531 14:10:12 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1266758 00:08:03.531 14:10:12 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:03.531 14:10:12 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:03.531 14:10:12 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1266758' 00:08:03.531 killing process with pid 1266758 00:08:03.531 14:10:12 app_cmdline -- common/autotest_common.sh@967 -- # kill 1266758 00:08:03.531 14:10:12 app_cmdline -- common/autotest_common.sh@972 -- # wait 1266758 00:08:06.065 00:08:06.065 real 0m4.518s 00:08:06.065 user 0m4.918s 00:08:06.065 sys 0m0.663s 00:08:06.065 14:10:15 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
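The app_cmdline run above demonstrates the effect of starting spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods: the two whitelisted methods answer normally, while env_dpdk_get_mem_stats is refused with JSON-RPC error -32601 (Method not found), which is exactly the error body the test checks for. A minimal sketch of that request/response flow, assuming a bare Python client talking straight to the default /var/tmp/spdk.sock Unix socket instead of going through scripts/rpc.py, might look like this:

    import json
    import socket

    def spdk_rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
        # Build a JSON-RPC 2.0 request shaped like the ones scripts/rpc.py sends.
        request = {"jsonrpc": "2.0", "id": 1, "method": method}
        if params is not None:
            request["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(request).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    break
                buf += chunk
                try:
                    return json.loads(buf.decode())   # stop once a complete JSON reply has arrived
                except ValueError:
                    continue                          # reply not complete yet, keep reading
        raise RuntimeError("connection closed before a complete JSON reply arrived")

    print(spdk_rpc("spdk_get_version"))        # whitelisted: returns the version object shown above
    print(spdk_rpc("env_dpdk_get_mem_stats"))  # not whitelisted: expect error -32601, Method not found

Against a target started with the --rpcs-allowed list above, the second call should come back with an error object matching the "Method not found" response captured in the log.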
00:08:06.065 14:10:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 ************************************ 00:08:06.065 END TEST app_cmdline 00:08:06.065 ************************************ 00:08:06.065 14:10:15 -- common/autotest_common.sh@1142 -- # return 0 00:08:06.065 14:10:15 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:06.065 14:10:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.065 14:10:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.065 14:10:15 -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 ************************************ 00:08:06.065 START TEST version 00:08:06.065 ************************************ 00:08:06.065 14:10:15 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:06.065 * Looking for test storage... 00:08:06.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:06.065 14:10:15 version -- app/version.sh@17 -- # get_header_version major 00:08:06.065 14:10:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:06.065 14:10:15 version -- app/version.sh@14 -- # cut -f2 00:08:06.065 14:10:15 version -- app/version.sh@14 -- # tr -d '"' 00:08:06.065 14:10:15 version -- app/version.sh@17 -- # major=24 00:08:06.065 14:10:15 version -- app/version.sh@18 -- # get_header_version minor 00:08:06.065 14:10:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:06.065 14:10:15 version -- app/version.sh@14 -- # cut -f2 00:08:06.065 14:10:15 version -- app/version.sh@14 -- # tr -d '"' 00:08:06.065 14:10:15 version -- app/version.sh@18 -- # minor=9 00:08:06.065 14:10:15 version -- app/version.sh@19 -- # get_header_version patch 00:08:06.065 14:10:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:06.065 14:10:15 version -- app/version.sh@14 -- # cut -f2 00:08:06.065 14:10:15 version -- app/version.sh@14 -- # tr -d '"' 00:08:06.065 14:10:15 version -- app/version.sh@19 -- # patch=0 00:08:06.065 14:10:15 version -- app/version.sh@20 -- # get_header_version suffix 00:08:06.065 14:10:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:06.065 14:10:15 version -- app/version.sh@14 -- # cut -f2 00:08:06.065 14:10:15 version -- app/version.sh@14 -- # tr -d '"' 00:08:06.065 14:10:15 version -- app/version.sh@20 -- # suffix=-pre 00:08:06.065 14:10:15 version -- app/version.sh@22 -- # version=24.9 00:08:06.065 14:10:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:06.066 14:10:15 version -- app/version.sh@28 -- # version=24.9rc0 00:08:06.066 14:10:15 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:06.066 14:10:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:08:06.324 14:10:15 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:06.324 14:10:15 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:06.324 00:08:06.324 real 0m0.103s 00:08:06.325 user 0m0.056s 00:08:06.325 sys 0m0.069s 00:08:06.325 14:10:15 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.325 14:10:15 version -- common/autotest_common.sh@10 -- # set +x 00:08:06.325 ************************************ 00:08:06.325 END TEST version 00:08:06.325 ************************************ 00:08:06.325 14:10:15 -- common/autotest_common.sh@1142 -- # return 0 00:08:06.325 14:10:15 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:06.325 14:10:15 -- spdk/autotest.sh@198 -- # uname -s 00:08:06.325 14:10:15 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:06.325 14:10:15 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:06.325 14:10:15 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:06.325 14:10:15 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:06.325 14:10:15 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:06.325 14:10:15 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:06.325 14:10:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:06.325 14:10:15 -- common/autotest_common.sh@10 -- # set +x 00:08:06.325 14:10:15 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:06.325 14:10:15 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:06.325 14:10:15 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:06.325 14:10:15 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:06.325 14:10:15 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:06.325 14:10:15 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:06.325 14:10:15 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:06.325 14:10:15 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:06.325 14:10:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.325 14:10:15 -- common/autotest_common.sh@10 -- # set +x 00:08:06.325 ************************************ 00:08:06.325 START TEST nvmf_tcp 00:08:06.325 ************************************ 00:08:06.325 14:10:15 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:06.325 * Looking for test storage... 00:08:06.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.325 14:10:15 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.325 14:10:15 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.325 14:10:15 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.325 14:10:15 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.325 14:10:15 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.325 14:10:15 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.325 14:10:15 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:06.325 14:10:15 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:06.325 14:10:15 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:06.325 14:10:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:06.325 14:10:15 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:06.325 14:10:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:06.325 14:10:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.325 14:10:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:06.325 ************************************ 00:08:06.325 START TEST nvmf_example 00:08:06.325 ************************************ 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:06.325 * Looking for test storage... 
00:08:06.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.325 14:10:15 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:06.326 14:10:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:08.227 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:08.227 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:08.227 Found net devices under 
0000:0a:00.0: cvl_0_0 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:08.227 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.227 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.228 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:08.228 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.228 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.228 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:08.228 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:08.228 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.228 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.228 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.228 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.228 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:08.228 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.487 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.487 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:08.487 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:08.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:08:08.487 00:08:08.487 --- 10.0.0.2 ping statistics --- 00:08:08.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.487 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:08:08.487 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:08:08.487 00:08:08.488 --- 10.0.0.1 ping statistics --- 00:08:08.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.488 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1269071 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1269071 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1269071 ']' 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
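The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line is printed by waitforlisten, which keeps checking that the freshly launched target is still alive and that its RPC socket accepts connections before the test continues. A rough standalone equivalent of that polling loop, sketched in Python with an assumed 30-second timeout and simplified error handling rather than the harness's exact logic, could be:

    import os
    import socket
    import time

    def wait_for_listen(pid, sock_path="/var/tmp/spdk.sock", timeout=30.0):
        """Poll until process `pid` is alive and `sock_path` accepts a connection."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                os.kill(pid, 0)                    # signal 0: just check the process still exists
            except OSError:
                raise RuntimeError(f"process {pid} exited before it started listening")
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(sock_path)
                return True                        # RPC socket is up, safe to start sending commands
            except (FileNotFoundError, ConnectionRefusedError):
                time.sleep(0.5)                    # not listening yet, retry
        raise TimeoutError(f"{sock_path} did not start listening within {timeout}s")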
00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.488 14:10:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:08.488 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.423 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.423 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:09.423 14:10:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:09.423 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:09.423 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:09.423 14:10:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.423 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.423 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:09.423 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.423 14:10:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:09.423 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.423 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:09.681 14:10:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:09.681 EAL: No free 2048 kB hugepages reported on node 1 
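Before the performance figures that follow, the rpc_cmd calls above build up the example target: a TCP transport (with the -o and -u 8192 tuning flags from NVMF_TRANSPORT_OPTS), a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev attached as a namespace, and a TCP listener on 10.0.0.2:4420. Expressed as direct JSON-RPC calls the sequence would look roughly like the sketch below; the method names appear in the log, but the JSON parameter names are assumptions based on the standard SPDK RPC interface and should be verified against scripts/rpc.py before use.

    import json
    import socket

    def rpc(method, params, sock_path="/var/tmp/spdk.sock"):
        # Minimal one-shot JSON-RPC client; see the earlier sketch for a more careful reader loop.
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            return json.loads(s.recv(65536).decode())

    nqn = "nqn.2016-06.io.spdk:cnode1"
    # rpc_cmd nvmf_create_transport -t tcp ...  (the -o / -u 8192 tuning flags are omitted in this sketch)
    rpc("nvmf_create_transport", {"trtype": "TCP"})
    # rpc_cmd bdev_malloc_create 64 512  -> 64 MiB bdev, 512-byte blocks (the target named it Malloc0 in this run)
    rpc("bdev_malloc_create", {"num_blocks": 64 * 1024 * 1024 // 512, "block_size": 512})
    # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc("nvmf_create_subsystem", {"nqn": nqn, "allow_any_host": True,
                                  "serial_number": "SPDK00000000000001"})
    # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc("nvmf_subsystem_add_ns", {"nqn": nqn, "namespace": {"bdev_name": "Malloc0"}})
    # rpc_cmd nvmf_subsystem_add_listener ... -t tcp -a 10.0.0.2 -s 4420
    rpc("nvmf_subsystem_add_listener", {"nqn": nqn,
                                        "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                                                           "traddr": "10.0.0.2",
                                                           "trsvcid": "4420"}})

The spdk_nvme_perf invocation then drives 64-deep 4096-byte random mixed I/O (-q 64 -o 4096 -w randrw -M 30) against that subsystem for 10 seconds, producing the summary below.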
00:08:21.877 Initializing NVMe Controllers 00:08:21.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:21.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:21.877 Initialization complete. Launching workers. 00:08:21.877 ======================================================== 00:08:21.877 Latency(us) 00:08:21.877 Device Information : IOPS MiB/s Average min max 00:08:21.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12011.30 46.92 5328.05 1250.78 15695.64 00:08:21.877 ======================================================== 00:08:21.877 Total : 12011.30 46.92 5328.05 1250.78 15695.64 00:08:21.877 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:21.877 rmmod nvme_tcp 00:08:21.877 rmmod nvme_fabrics 00:08:21.877 rmmod nvme_keyring 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1269071 ']' 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1269071 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1269071 ']' 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1269071 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1269071 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1269071' 00:08:21.877 killing process with pid 1269071 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1269071 00:08:21.877 14:10:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1269071 00:08:21.877 nvmf threads initialize successfully 00:08:21.877 bdev subsystem init successfully 00:08:21.877 created a nvmf target service 00:08:21.877 create targets's poll groups done 00:08:21.877 all subsystems of target started 00:08:21.877 nvmf target is running 00:08:21.877 all subsystems of target stopped 00:08:21.877 destroy targets's poll groups done 00:08:21.877 destroyed the nvmf target service 00:08:21.877 bdev subsystem finish successfully 00:08:21.877 nvmf threads destroy successfully 00:08:21.877 14:10:30 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.877 14:10:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.877 14:10:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.877 14:10:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.877 14:10:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.877 14:10:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.877 14:10:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.877 14:10:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.256 14:10:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:23.256 14:10:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:23.256 14:10:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:23.256 14:10:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:23.517 00:08:23.517 real 0m17.014s 00:08:23.517 user 0m48.450s 00:08:23.517 sys 0m3.136s 00:08:23.517 14:10:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.517 14:10:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:23.517 ************************************ 00:08:23.517 END TEST nvmf_example 00:08:23.517 ************************************ 00:08:23.517 14:10:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:23.517 14:10:32 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:23.517 14:10:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:23.517 14:10:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.517 14:10:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:23.517 ************************************ 00:08:23.517 START TEST nvmf_filesystem 00:08:23.517 ************************************ 00:08:23.517 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:23.517 * Looking for test storage... 
00:08:23.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.517 14:10:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:23.517 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:23.517 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:23.517 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:23.517 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:23.517 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:23.517 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:23.517 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:23.517 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:23.517 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:23.517 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:23.518 14:10:32 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:23.518 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:23.518 #define SPDK_CONFIG_H 00:08:23.518 #define SPDK_CONFIG_APPS 1 00:08:23.518 #define SPDK_CONFIG_ARCH native 00:08:23.518 #define SPDK_CONFIG_ASAN 1 00:08:23.518 #undef SPDK_CONFIG_AVAHI 00:08:23.518 #undef SPDK_CONFIG_CET 00:08:23.518 #define SPDK_CONFIG_COVERAGE 1 00:08:23.518 #define SPDK_CONFIG_CROSS_PREFIX 00:08:23.518 #undef SPDK_CONFIG_CRYPTO 00:08:23.518 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:23.518 #undef SPDK_CONFIG_CUSTOMOCF 00:08:23.518 #undef SPDK_CONFIG_DAOS 00:08:23.518 #define SPDK_CONFIG_DAOS_DIR 00:08:23.518 #define SPDK_CONFIG_DEBUG 1 00:08:23.518 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:23.518 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:23.518 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:23.518 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:23.518 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:23.518 #undef SPDK_CONFIG_DPDK_UADK 00:08:23.518 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:23.518 #define SPDK_CONFIG_EXAMPLES 1 00:08:23.518 #undef SPDK_CONFIG_FC 00:08:23.518 #define SPDK_CONFIG_FC_PATH 00:08:23.518 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:23.518 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:23.518 #undef SPDK_CONFIG_FUSE 00:08:23.518 #undef SPDK_CONFIG_FUZZER 00:08:23.518 #define SPDK_CONFIG_FUZZER_LIB 00:08:23.518 #undef SPDK_CONFIG_GOLANG 00:08:23.518 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:23.518 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:23.518 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:23.518 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:23.518 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:23.518 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:23.519 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:23.519 #define SPDK_CONFIG_IDXD 1 00:08:23.519 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:23.519 #undef SPDK_CONFIG_IPSEC_MB 00:08:23.519 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:23.519 #define SPDK_CONFIG_ISAL 1 00:08:23.519 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:23.519 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:23.519 #define SPDK_CONFIG_LIBDIR 00:08:23.519 #undef SPDK_CONFIG_LTO 00:08:23.519 #define SPDK_CONFIG_MAX_LCORES 128 00:08:23.519 #define SPDK_CONFIG_NVME_CUSE 1 00:08:23.519 #undef SPDK_CONFIG_OCF 00:08:23.519 #define SPDK_CONFIG_OCF_PATH 00:08:23.519 #define 
SPDK_CONFIG_OPENSSL_PATH 00:08:23.519 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:23.519 #define SPDK_CONFIG_PGO_DIR 00:08:23.519 #undef SPDK_CONFIG_PGO_USE 00:08:23.519 #define SPDK_CONFIG_PREFIX /usr/local 00:08:23.519 #undef SPDK_CONFIG_RAID5F 00:08:23.519 #undef SPDK_CONFIG_RBD 00:08:23.519 #define SPDK_CONFIG_RDMA 1 00:08:23.519 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:23.519 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:23.519 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:23.519 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:23.519 #define SPDK_CONFIG_SHARED 1 00:08:23.519 #undef SPDK_CONFIG_SMA 00:08:23.519 #define SPDK_CONFIG_TESTS 1 00:08:23.519 #undef SPDK_CONFIG_TSAN 00:08:23.519 #define SPDK_CONFIG_UBLK 1 00:08:23.519 #define SPDK_CONFIG_UBSAN 1 00:08:23.519 #undef SPDK_CONFIG_UNIT_TESTS 00:08:23.519 #undef SPDK_CONFIG_URING 00:08:23.519 #define SPDK_CONFIG_URING_PATH 00:08:23.519 #undef SPDK_CONFIG_URING_ZNS 00:08:23.519 #undef SPDK_CONFIG_USDT 00:08:23.519 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:23.519 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:23.519 #undef SPDK_CONFIG_VFIO_USER 00:08:23.519 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:23.519 #define SPDK_CONFIG_VHOST 1 00:08:23.519 #define SPDK_CONFIG_VIRTIO 1 00:08:23.519 #undef SPDK_CONFIG_VTUNE 00:08:23.519 #define SPDK_CONFIG_VTUNE_DIR 00:08:23.519 #define SPDK_CONFIG_WERROR 1 00:08:23.519 #define SPDK_CONFIG_WPDK_DIR 00:08:23.519 #undef SPDK_CONFIG_XNVME 00:08:23.519 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:23.519 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:23.520 14:10:32 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:23.520 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1270997 ]] 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1270997 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.5NL8Nt 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.5NL8Nt/tests/target /tmp/spdk.5NL8Nt 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55269359616 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994737664 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6725378048 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941732864 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997368832 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390187008 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398948352 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996385792 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997368832 00:08:23.521 14:10:32 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=983040 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199468032 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199472128 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:23.521 * Looking for test storage... 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55269359616 00:08:23.521 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8939970560 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:23.522 14:10:32 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:23.522 14:10:32 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:23.522 14:10:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:25.423 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:25.423 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.423 14:10:34 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:25.423 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:25.423 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.423 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.682 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.682 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.682 14:10:34 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.682 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.682 14:10:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:08:25.682 00:08:25.682 --- 10.0.0.2 ping statistics --- 00:08:25.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.682 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:08:25.682 00:08:25.682 --- 10.0.0.1 ping statistics --- 00:08:25.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.682 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.682 ************************************ 00:08:25.682 START TEST nvmf_filesystem_no_in_capsule 00:08:25.682 ************************************ 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1272626 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1272626 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1272626 ']' 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.682 14:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.940 [2024-07-10 14:10:35.172093] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:08:25.940 [2024-07-10 14:10:35.172213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.940 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.940 [2024-07-10 14:10:35.316369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.198 [2024-07-10 14:10:35.585012] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.198 [2024-07-10 14:10:35.585090] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.198 [2024-07-10 14:10:35.585119] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.198 [2024-07-10 14:10:35.585140] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.198 [2024-07-10 14:10:35.585162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
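Condensed, the interface wiring that nvmf_tcp_init performs above amounts to roughly the following sketch (the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are the ones seen in this particular run, not fixed values):

  # Give the target-side port its own network namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator keeps 10.0.0.1 in the root namespace; the target gets 10.0.0.2 inside the namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # Bring both ends (plus loopback in the namespace) up and open TCP port 4420 for NVMe/TCP.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Verify reachability in both directions before nvmf_tgt is started inside the namespace.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1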
00:08:26.198 [2024-07-10 14:10:35.585290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.199 [2024-07-10 14:10:35.585349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.199 [2024-07-10 14:10:35.585395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.199 [2024-07-10 14:10:35.585406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.765 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.765 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:26.765 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.765 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.765 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.765 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.765 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:26.765 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:26.765 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.765 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.765 [2024-07-10 14:10:36.193861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.765 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.765 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:26.765 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.765 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.331 Malloc1 00:08:27.331 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.331 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:27.331 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.331 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.331 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.331 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:27.331 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.331 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:27.331 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.331 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.331 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.331 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.331 [2024-07-10 14:10:36.772077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.331 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.332 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:27.332 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:27.332 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:27.332 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:27.332 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:27.332 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:27.332 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.332 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.332 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.332 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:27.332 { 00:08:27.332 "name": "Malloc1", 00:08:27.332 "aliases": [ 00:08:27.332 "a088a0da-32f6-4c1d-bb19-4145759177dd" 00:08:27.332 ], 00:08:27.332 "product_name": "Malloc disk", 00:08:27.332 "block_size": 512, 00:08:27.332 "num_blocks": 1048576, 00:08:27.332 "uuid": "a088a0da-32f6-4c1d-bb19-4145759177dd", 00:08:27.332 "assigned_rate_limits": { 00:08:27.332 "rw_ios_per_sec": 0, 00:08:27.332 "rw_mbytes_per_sec": 0, 00:08:27.332 "r_mbytes_per_sec": 0, 00:08:27.332 "w_mbytes_per_sec": 0 00:08:27.332 }, 00:08:27.332 "claimed": true, 00:08:27.332 "claim_type": "exclusive_write", 00:08:27.332 "zoned": false, 00:08:27.332 "supported_io_types": { 00:08:27.332 "read": true, 00:08:27.332 "write": true, 00:08:27.332 "unmap": true, 00:08:27.332 "flush": true, 00:08:27.332 "reset": true, 00:08:27.332 "nvme_admin": false, 00:08:27.332 "nvme_io": false, 00:08:27.332 "nvme_io_md": false, 00:08:27.332 "write_zeroes": true, 00:08:27.332 "zcopy": true, 00:08:27.332 "get_zone_info": false, 00:08:27.332 "zone_management": false, 00:08:27.332 "zone_append": false, 00:08:27.332 "compare": false, 00:08:27.332 "compare_and_write": false, 00:08:27.332 "abort": true, 00:08:27.332 "seek_hole": false, 00:08:27.332 "seek_data": false, 00:08:27.332 "copy": true, 00:08:27.332 "nvme_iov_md": false 00:08:27.332 }, 00:08:27.332 "memory_domains": [ 00:08:27.332 { 
00:08:27.332 "dma_device_id": "system", 00:08:27.332 "dma_device_type": 1 00:08:27.332 }, 00:08:27.332 { 00:08:27.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.332 "dma_device_type": 2 00:08:27.332 } 00:08:27.332 ], 00:08:27.332 "driver_specific": {} 00:08:27.332 } 00:08:27.332 ]' 00:08:27.332 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:27.589 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:27.590 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:27.590 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:27.590 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:27.590 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:27.590 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:27.590 14:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:28.155 14:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:28.155 14:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:28.155 14:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:28.155 14:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:28.155 14:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:30.099 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:30.099 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:30.099 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:30.099 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:30.099 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:30.099 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:30.099 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:30.099 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:30.357 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:30.357 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:08:30.357 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:30.357 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:30.357 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:30.357 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:30.357 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:30.357 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:30.357 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:30.357 14:10:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:31.733 14:10:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.667 ************************************ 00:08:32.667 START TEST filesystem_ext4 00:08:32.667 ************************************ 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:32.667 14:10:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:32.667 14:10:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:32.667 mke2fs 1.46.5 (30-Dec-2021) 00:08:32.667 Discarding device blocks: 0/522240 done 00:08:32.667 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:32.667 Filesystem UUID: 50017a22-e58d-42e4-93ad-3ab87fa40050 00:08:32.667 Superblock backups stored on blocks: 00:08:32.667 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:32.667 00:08:32.667 Allocating group tables: 0/64 done 00:08:32.667 Writing inode tables: 0/64 done 00:08:33.234 Creating journal (8192 blocks): done 00:08:34.058 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:08:34.058 00:08:34.058 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:34.058 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1272626 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:34.624 00:08:34.624 real 0m2.107s 00:08:34.624 user 0m0.010s 00:08:34.624 sys 0m0.058s 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:34.624 ************************************ 00:08:34.624 END TEST filesystem_ext4 00:08:34.624 ************************************ 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:34.624 14:10:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.624 ************************************ 00:08:34.624 START TEST filesystem_btrfs 00:08:34.624 ************************************ 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:34.624 14:10:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:34.882 btrfs-progs v6.6.2 00:08:34.882 See https://btrfs.readthedocs.io for more information. 00:08:34.882 00:08:34.882 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:34.882 NOTE: several default settings have changed in version 5.15, please make sure 00:08:34.882 this does not affect your deployments: 00:08:34.882 - DUP for metadata (-m dup) 00:08:34.882 - enabled no-holes (-O no-holes) 00:08:34.882 - enabled free-space-tree (-R free-space-tree) 00:08:34.882 00:08:34.882 Label: (null) 00:08:34.882 UUID: 7801307c-2f6f-4e29-8688-c32202ec9197 00:08:34.882 Node size: 16384 00:08:34.882 Sector size: 4096 00:08:34.882 Filesystem size: 510.00MiB 00:08:34.882 Block group profiles: 00:08:34.882 Data: single 8.00MiB 00:08:34.882 Metadata: DUP 32.00MiB 00:08:34.882 System: DUP 8.00MiB 00:08:34.882 SSD detected: yes 00:08:34.882 Zoned device: no 00:08:34.882 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:34.882 Runtime features: free-space-tree 00:08:34.882 Checksum: crc32c 00:08:34.882 Number of devices: 1 00:08:34.882 Devices: 00:08:34.882 ID SIZE PATH 00:08:34.882 1 510.00MiB /dev/nvme0n1p1 00:08:34.882 00:08:34.882 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:34.882 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:35.140 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:35.140 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:35.140 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:35.140 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:35.140 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1272626 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:35.141 00:08:35.141 real 0m0.478s 00:08:35.141 user 0m0.025s 00:08:35.141 sys 0m0.099s 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 ************************************ 00:08:35.141 END TEST filesystem_btrfs 00:08:35.141 ************************************ 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 ************************************ 00:08:35.141 START TEST filesystem_xfs 00:08:35.141 ************************************ 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:35.141 14:10:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:35.141 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:35.141 = sectsz=512 attr=2, projid32bit=1 00:08:35.141 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:35.141 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:35.141 data = bsize=4096 blocks=130560, imaxpct=25 00:08:35.141 = sunit=0 swidth=0 blks 00:08:35.141 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:35.141 log =internal log bsize=4096 blocks=16384, version=2 00:08:35.141 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:35.141 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:36.515 Discarding blocks...Done. 
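Once the xfs filesystem is created, the verification at target/filesystem.sh@23-43 is the same short write/remove cycle already used for ext4 and btrfs above; roughly this sketch, with the device and mount point seen in this run and $nvmfpid being the target PID recorded earlier (1272626 here):

  mount /dev/nvme0n1p1 /mnt/device          # mount the freshly created filesystem
  touch /mnt/device/aaa && sync             # write a file and flush it
  rm /mnt/device/aaa && sync                # remove it again
  umount /mnt/device

  kill -0 "$nvmfpid"                        # the nvmf_tgt process must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1     # the namespace is still visible on the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # ...and so is its partition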
00:08:36.515 14:10:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:36.515 14:10:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1272626 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:38.415 00:08:38.415 real 0m3.323s 00:08:38.415 user 0m0.015s 00:08:38.415 sys 0m0.059s 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:38.415 ************************************ 00:08:38.415 END TEST filesystem_xfs 00:08:38.415 ************************************ 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:38.415 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:38.674 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:38.674 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:38.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.674 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:38.674 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:38.674 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:38.674 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:38.674 14:10:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:38.674 14:10:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1272626 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1272626 ']' 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1272626 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1272626 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1272626' 00:08:38.674 killing process with pid 1272626 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1272626 00:08:38.674 14:10:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1272626 00:08:41.204 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:41.204 00:08:41.204 real 0m15.556s 00:08:41.204 user 0m57.577s 00:08:41.204 sys 0m2.089s 00:08:41.204 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.204 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.204 ************************************ 00:08:41.204 END TEST nvmf_filesystem_no_in_capsule 00:08:41.204 ************************************ 00:08:41.204 14:10:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:41.204 14:10:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:41.204 14:10:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:08:41.204 14:10:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.204 14:10:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:41.462 ************************************ 00:08:41.462 START TEST nvmf_filesystem_in_capsule 00:08:41.462 ************************************ 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1274595 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1274595 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1274595 ']' 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.462 14:10:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.462 [2024-07-10 14:10:50.794117] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:08:41.462 [2024-07-10 14:10:50.794249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.462 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.462 [2024-07-10 14:10:50.930285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.721 [2024-07-10 14:10:51.195242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.721 [2024-07-10 14:10:51.195319] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:41.721 [2024-07-10 14:10:51.195347] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.721 [2024-07-10 14:10:51.195367] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.721 [2024-07-10 14:10:51.195387] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.721 [2024-07-10 14:10:51.195824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.721 [2024-07-10 14:10:51.195881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.721 [2024-07-10 14:10:51.195928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.721 [2024-07-10 14:10:51.195940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.286 14:10:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.286 14:10:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:42.286 14:10:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:42.286 14:10:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:42.286 14:10:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.286 14:10:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.286 14:10:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:42.286 14:10:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:42.286 14:10:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.286 14:10:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.286 [2024-07-10 14:10:51.733579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.286 14:10:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.286 14:10:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:42.286 14:10:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.286 14:10:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.851 Malloc1 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.851 14:10:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.851 [2024-07-10 14:10:52.314517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:42.851 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:42.852 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:42.852 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:42.852 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.852 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.109 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.109 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:43.109 { 00:08:43.109 "name": "Malloc1", 00:08:43.109 "aliases": [ 00:08:43.109 "a69007a6-dfad-42ec-b536-391bb144d3e1" 00:08:43.109 ], 00:08:43.109 "product_name": "Malloc disk", 00:08:43.109 "block_size": 512, 00:08:43.109 "num_blocks": 1048576, 00:08:43.109 "uuid": "a69007a6-dfad-42ec-b536-391bb144d3e1", 00:08:43.109 "assigned_rate_limits": { 00:08:43.109 "rw_ios_per_sec": 0, 00:08:43.109 "rw_mbytes_per_sec": 0, 00:08:43.109 "r_mbytes_per_sec": 0, 00:08:43.109 "w_mbytes_per_sec": 0 00:08:43.109 }, 00:08:43.109 "claimed": true, 00:08:43.109 "claim_type": "exclusive_write", 00:08:43.109 "zoned": false, 00:08:43.109 "supported_io_types": { 00:08:43.109 "read": true, 00:08:43.109 "write": true, 00:08:43.109 "unmap": true, 00:08:43.109 "flush": true, 00:08:43.109 "reset": true, 00:08:43.109 "nvme_admin": false, 00:08:43.109 "nvme_io": false, 00:08:43.109 "nvme_io_md": false, 00:08:43.109 "write_zeroes": true, 00:08:43.109 "zcopy": true, 00:08:43.109 "get_zone_info": false, 00:08:43.109 "zone_management": false, 00:08:43.109 
"zone_append": false, 00:08:43.109 "compare": false, 00:08:43.109 "compare_and_write": false, 00:08:43.109 "abort": true, 00:08:43.109 "seek_hole": false, 00:08:43.109 "seek_data": false, 00:08:43.109 "copy": true, 00:08:43.109 "nvme_iov_md": false 00:08:43.109 }, 00:08:43.109 "memory_domains": [ 00:08:43.109 { 00:08:43.109 "dma_device_id": "system", 00:08:43.109 "dma_device_type": 1 00:08:43.109 }, 00:08:43.109 { 00:08:43.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.109 "dma_device_type": 2 00:08:43.109 } 00:08:43.109 ], 00:08:43.109 "driver_specific": {} 00:08:43.109 } 00:08:43.109 ]' 00:08:43.109 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:43.109 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:43.109 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:43.109 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:43.109 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:43.109 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:43.109 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:43.109 14:10:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:43.673 14:10:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:43.673 14:10:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:43.673 14:10:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:43.673 14:10:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:43.673 14:10:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:45.568 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:45.568 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:45.568 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:45.568 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:45.568 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:45.568 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:45.568 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:45.568 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:45.826 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:45.826 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:45.826 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:45.826 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:45.826 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:45.826 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:45.826 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:45.826 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:45.826 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:45.826 14:10:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:46.757 14:10:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:47.684 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:47.684 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:47.684 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:47.684 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.684 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.941 ************************************ 00:08:47.941 START TEST filesystem_in_capsule_ext4 00:08:47.941 ************************************ 00:08:47.941 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:47.941 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:47.941 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:47.941 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:47.941 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:47.941 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:47.941 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:47.941 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:47.941 14:10:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:47.941 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:47.941 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:47.941 mke2fs 1.46.5 (30-Dec-2021) 00:08:47.941 Discarding device blocks: 0/522240 done 00:08:47.941 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:47.941 Filesystem UUID: f4c891ef-1744-4a93-a7b4-21464be964f4 00:08:47.941 Superblock backups stored on blocks: 00:08:47.941 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:47.941 00:08:47.941 Allocating group tables: 0/64 done 00:08:47.941 Writing inode tables: 0/64 done 00:08:48.199 Creating journal (8192 blocks): done 00:08:48.199 Writing superblocks and filesystem accounting information: 0/64 done 00:08:48.199 00:08:48.199 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:48.199 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:48.199 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1274595 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:48.456 00:08:48.456 real 0m0.562s 00:08:48.456 user 0m0.020s 00:08:48.456 sys 0m0.052s 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:48.456 ************************************ 00:08:48.456 END TEST filesystem_in_capsule_ext4 00:08:48.456 ************************************ 00:08:48.456 
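The only functional difference between this in-capsule pass and the earlier nvmf_filesystem_no_in_capsule pass is the in-capsule data size handed to nvmf_filesystem_part, which flows into transport creation; sketched below with the harness's rpc_cmd helper (a thin wrapper around the target's RPC interface), using the values visible in both runs:

  # First pass (nvmf_filesystem_part 0): no in-capsule data on the TCP transport.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0

  # Second pass (nvmf_filesystem_part 4096): allow up to 4096 bytes of in-capsule data,
  # then repeat the same ext4/btrfs/xfs create-mount-write-unmount cycle against cnode1.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096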
14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.456 ************************************ 00:08:48.456 START TEST filesystem_in_capsule_btrfs 00:08:48.456 ************************************ 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:48.456 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:48.712 btrfs-progs v6.6.2 00:08:48.712 See https://btrfs.readthedocs.io for more information. 00:08:48.712 00:08:48.712 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:48.712 NOTE: several default settings have changed in version 5.15, please make sure 00:08:48.712 this does not affect your deployments: 00:08:48.712 - DUP for metadata (-m dup) 00:08:48.712 - enabled no-holes (-O no-holes) 00:08:48.712 - enabled free-space-tree (-R free-space-tree) 00:08:48.712 00:08:48.712 Label: (null) 00:08:48.712 UUID: 458b734e-e400-4540-95cb-652792e3d711 00:08:48.712 Node size: 16384 00:08:48.712 Sector size: 4096 00:08:48.712 Filesystem size: 510.00MiB 00:08:48.712 Block group profiles: 00:08:48.712 Data: single 8.00MiB 00:08:48.712 Metadata: DUP 32.00MiB 00:08:48.712 System: DUP 8.00MiB 00:08:48.712 SSD detected: yes 00:08:48.712 Zoned device: no 00:08:48.712 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:48.712 Runtime features: free-space-tree 00:08:48.712 Checksum: crc32c 00:08:48.712 Number of devices: 1 00:08:48.712 Devices: 00:08:48.712 ID SIZE PATH 00:08:48.712 1 510.00MiB /dev/nvme0n1p1 00:08:48.712 00:08:48.712 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:48.712 14:10:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1274595 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:49.642 00:08:49.642 real 0m1.153s 00:08:49.642 user 0m0.026s 00:08:49.642 sys 0m0.104s 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:49.642 ************************************ 00:08:49.642 END TEST filesystem_in_capsule_btrfs 00:08:49.642 ************************************ 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:49.642 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:49.643 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.643 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:49.643 ************************************ 00:08:49.643 START TEST filesystem_in_capsule_xfs 00:08:49.643 ************************************ 00:08:49.643 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:49.643 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:49.643 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:49.643 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:49.643 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:49.643 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:49.643 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:49.643 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:49.643 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:49.643 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:49.643 14:10:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:49.643 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:49.643 = sectsz=512 attr=2, projid32bit=1 00:08:49.643 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:49.643 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:49.643 data = bsize=4096 blocks=130560, imaxpct=25 00:08:49.643 = sunit=0 swidth=0 blks 00:08:49.643 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:49.643 log =internal log bsize=4096 blocks=16384, version=2 00:08:49.643 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:49.643 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:50.574 Discarding blocks...Done. 
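For reference, the per-filesystem check that target/filesystem.sh traces in this run (ext4 and btrfs above, xfs just below) boils down to the steps sketched here. This is a minimal reconstruction from the trace, not the script itself: the device paths and the NVMF_PID variable are placeholders, and the real suite drives everything through its make_filesystem helper.

    #!/usr/bin/env bash
    # Minimal sketch of the flow traced in this log: partition the exported namespace,
    # build a filesystem, exercise it, and confirm the target stayed up.
    set -e
    dev=/dev/nvme0n1          # NVMe-oF namespace as seen on the initiator
    part=${dev}p1
    mnt=/mnt/device
    fstype=$1                 # ext4 | btrfs | xfs

    parted -s "$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1
    mkdir -p "$mnt"

    # ext4 takes -F to force, the others take -f (see the force= lines in the trace)
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    "mkfs.$fstype" "$force" "$part"

    mount "$part" "$mnt"
    touch "$mnt/aaa" && sync
    rm "$mnt/aaa" && sync
    umount "$mnt"

    kill -0 "$NVMF_PID"                        # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible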
00:08:50.574 14:10:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:50.574 14:10:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:53.099 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:53.100 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:53.100 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:53.100 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:53.100 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:53.100 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:53.100 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1274595 00:08:53.100 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:53.100 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:53.100 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:53.100 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:53.100 00:08:53.100 real 0m3.284s 00:08:53.100 user 0m0.022s 00:08:53.100 sys 0m0.051s 00:08:53.100 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.100 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:53.100 ************************************ 00:08:53.100 END TEST filesystem_in_capsule_xfs 00:08:53.100 ************************************ 00:08:53.100 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:53.100 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:53.358 14:11:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1274595 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1274595 ']' 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1274595 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1274595 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1274595' 00:08:53.358 killing process with pid 1274595 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1274595 00:08:53.358 14:11:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1274595 00:08:56.639 14:11:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:56.639 00:08:56.639 real 0m14.684s 00:08:56.639 user 0m54.118s 00:08:56.639 sys 0m2.014s 00:08:56.639 14:11:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.639 14:11:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.639 ************************************ 00:08:56.639 END TEST nvmf_filesystem_in_capsule 00:08:56.639 ************************************ 00:08:56.639 14:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:56.640 rmmod nvme_tcp 00:08:56.640 rmmod nvme_fabrics 00:08:56.640 rmmod nvme_keyring 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.640 14:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.017 14:11:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:58.017 00:08:58.017 real 0m34.702s 00:08:58.017 user 1m52.584s 00:08:58.017 sys 0m5.675s 00:08:58.017 14:11:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.017 14:11:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.017 ************************************ 00:08:58.017 END TEST nvmf_filesystem 00:08:58.017 ************************************ 00:08:58.276 14:11:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:58.276 14:11:07 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:58.276 14:11:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:58.276 14:11:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.276 14:11:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:58.276 ************************************ 00:08:58.276 START TEST nvmf_target_discovery 00:08:58.276 ************************************ 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:58.276 * Looking for test storage... 
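Between the end of nvmf_filesystem above and the discovery test that starts here, the suite tears its target back down. Roughly, and with scripts/rpc.py standing in for the suite's rpc_cmd wrapper and NVMF_PID as a placeholder for the nvmf_tgt PID, that teardown looks like:

    # Sketch of the teardown traced above (not the verbatim script).
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop the SPDK_TEST partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # detach the initiator
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$NVMF_PID"                                   # stop nvmf_tgt
    modprobe -v -r nvme-tcp                            # unload initiator modules
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1                           # clear the test interface address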
00:08:58.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.276 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:58.277 14:11:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.178 14:11:09 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:00.178 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:00.179 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:00.179 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:00.179 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:00.179 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:00.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:09:00.179 00:09:00.179 --- 10.0.0.2 ping statistics --- 00:09:00.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.179 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:00.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:09:00.179 00:09:00.179 --- 10.0.0.1 ping statistics --- 00:09:00.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.179 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1278468 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1278468 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1278468 ']' 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:00.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.179 14:11:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:00.438 [2024-07-10 14:11:09.709977] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:09:00.438 [2024-07-10 14:11:09.710109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.438 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.438 [2024-07-10 14:11:09.851296] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.696 [2024-07-10 14:11:10.121497] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.696 [2024-07-10 14:11:10.121575] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.696 [2024-07-10 14:11:10.121603] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.696 [2024-07-10 14:11:10.121624] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.696 [2024-07-10 14:11:10.121646] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.696 [2024-07-10 14:11:10.121778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.696 [2024-07-10 14:11:10.121841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.696 [2024-07-10 14:11:10.121880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.696 [2024-07-10 14:11:10.121895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.262 [2024-07-10 14:11:10.677592] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
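The loop traced above and continued below (target/discovery.sh@26, seq 1 4) stands up four single-namespace subsystems plus the discovery listener and a referral. A sketch of the same sequence, with scripts/rpc.py in place of the rpc_cmd wrapper and the 10.0.0.2 / 4420 / 4430 values taken from the netns setup and NVMF_PORT_REFERRAL earlier in this log:

    # Discovery-target setup as traced in this section (sketch, not the script itself).
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        scripts/rpc.py bdev_null_create "Null$i" 102400 512              # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"                                  # allow any host, fixed serial
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery service itself
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # sixth discovery-log record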
00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.262 Null1 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.262 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.263 [2024-07-10 14:11:10.718753] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.263 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.263 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:01.263 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:01.263 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.263 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.263 Null2 00:09:01.263 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.263 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:01.263 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.263 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.263 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.263 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:01.263 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.263 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:01.534 14:11:10 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.534 Null3 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.534 Null4 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.534 14:11:10 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:01.534 00:09:01.534 Discovery Log Number of Records 6, Generation counter 6 00:09:01.534 =====Discovery Log Entry 0====== 00:09:01.534 trtype: tcp 00:09:01.534 adrfam: ipv4 00:09:01.534 subtype: current discovery subsystem 00:09:01.534 treq: not required 00:09:01.534 portid: 0 00:09:01.534 trsvcid: 4420 00:09:01.534 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:01.534 traddr: 10.0.0.2 00:09:01.534 eflags: explicit discovery connections, duplicate discovery information 00:09:01.534 sectype: none 00:09:01.534 =====Discovery Log Entry 1====== 00:09:01.534 trtype: tcp 00:09:01.534 adrfam: ipv4 00:09:01.534 subtype: nvme subsystem 00:09:01.534 treq: not required 00:09:01.534 portid: 0 00:09:01.534 trsvcid: 4420 00:09:01.534 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:01.534 traddr: 10.0.0.2 00:09:01.534 eflags: none 00:09:01.534 sectype: none 00:09:01.534 =====Discovery Log Entry 2====== 00:09:01.534 trtype: tcp 00:09:01.534 adrfam: ipv4 00:09:01.534 subtype: nvme subsystem 00:09:01.534 treq: not required 00:09:01.534 portid: 0 00:09:01.534 trsvcid: 4420 00:09:01.534 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:01.534 traddr: 10.0.0.2 00:09:01.534 eflags: none 00:09:01.534 sectype: none 00:09:01.534 =====Discovery Log Entry 3====== 00:09:01.534 trtype: tcp 00:09:01.534 adrfam: ipv4 00:09:01.534 subtype: nvme subsystem 00:09:01.534 treq: not required 00:09:01.534 portid: 0 00:09:01.534 trsvcid: 4420 00:09:01.534 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:01.534 traddr: 10.0.0.2 00:09:01.534 eflags: none 00:09:01.534 sectype: none 00:09:01.534 =====Discovery Log Entry 4====== 00:09:01.534 trtype: tcp 00:09:01.534 adrfam: ipv4 00:09:01.534 subtype: nvme subsystem 00:09:01.534 treq: not required 
00:09:01.534 portid: 0 00:09:01.534 trsvcid: 4420 00:09:01.534 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:01.534 traddr: 10.0.0.2 00:09:01.534 eflags: none 00:09:01.534 sectype: none 00:09:01.534 =====Discovery Log Entry 5====== 00:09:01.534 trtype: tcp 00:09:01.534 adrfam: ipv4 00:09:01.534 subtype: discovery subsystem referral 00:09:01.534 treq: not required 00:09:01.534 portid: 0 00:09:01.534 trsvcid: 4430 00:09:01.534 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:01.534 traddr: 10.0.0.2 00:09:01.534 eflags: none 00:09:01.534 sectype: none 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:01.534 Perform nvmf subsystem discovery via RPC 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.534 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.535 [ 00:09:01.535 { 00:09:01.535 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:01.535 "subtype": "Discovery", 00:09:01.535 "listen_addresses": [ 00:09:01.535 { 00:09:01.535 "trtype": "TCP", 00:09:01.535 "adrfam": "IPv4", 00:09:01.535 "traddr": "10.0.0.2", 00:09:01.535 "trsvcid": "4420" 00:09:01.535 } 00:09:01.535 ], 00:09:01.535 "allow_any_host": true, 00:09:01.535 "hosts": [] 00:09:01.535 }, 00:09:01.535 { 00:09:01.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.535 "subtype": "NVMe", 00:09:01.535 "listen_addresses": [ 00:09:01.535 { 00:09:01.535 "trtype": "TCP", 00:09:01.535 "adrfam": "IPv4", 00:09:01.535 "traddr": "10.0.0.2", 00:09:01.535 "trsvcid": "4420" 00:09:01.535 } 00:09:01.535 ], 00:09:01.535 "allow_any_host": true, 00:09:01.535 "hosts": [], 00:09:01.535 "serial_number": "SPDK00000000000001", 00:09:01.535 "model_number": "SPDK bdev Controller", 00:09:01.535 "max_namespaces": 32, 00:09:01.535 "min_cntlid": 1, 00:09:01.535 "max_cntlid": 65519, 00:09:01.535 "namespaces": [ 00:09:01.535 { 00:09:01.535 "nsid": 1, 00:09:01.535 "bdev_name": "Null1", 00:09:01.535 "name": "Null1", 00:09:01.535 "nguid": "59255E74E7704E52B44AAD7E64098A23", 00:09:01.535 "uuid": "59255e74-e770-4e52-b44a-ad7e64098a23" 00:09:01.535 } 00:09:01.535 ] 00:09:01.535 }, 00:09:01.535 { 00:09:01.535 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:01.535 "subtype": "NVMe", 00:09:01.535 "listen_addresses": [ 00:09:01.535 { 00:09:01.535 "trtype": "TCP", 00:09:01.535 "adrfam": "IPv4", 00:09:01.535 "traddr": "10.0.0.2", 00:09:01.535 "trsvcid": "4420" 00:09:01.535 } 00:09:01.535 ], 00:09:01.535 "allow_any_host": true, 00:09:01.535 "hosts": [], 00:09:01.535 "serial_number": "SPDK00000000000002", 00:09:01.535 "model_number": "SPDK bdev Controller", 00:09:01.535 "max_namespaces": 32, 00:09:01.535 "min_cntlid": 1, 00:09:01.535 "max_cntlid": 65519, 00:09:01.535 "namespaces": [ 00:09:01.535 { 00:09:01.535 "nsid": 1, 00:09:01.535 "bdev_name": "Null2", 00:09:01.535 "name": "Null2", 00:09:01.535 "nguid": "1237884DC7DC41AD9AF5A0AC1BDC8360", 00:09:01.535 "uuid": "1237884d-c7dc-41ad-9af5-a0ac1bdc8360" 00:09:01.535 } 00:09:01.535 ] 00:09:01.535 }, 00:09:01.535 { 00:09:01.535 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:01.535 "subtype": "NVMe", 00:09:01.535 "listen_addresses": [ 00:09:01.535 { 00:09:01.535 "trtype": "TCP", 00:09:01.535 "adrfam": "IPv4", 00:09:01.535 "traddr": "10.0.0.2", 00:09:01.535 "trsvcid": "4420" 00:09:01.535 } 00:09:01.535 ], 00:09:01.535 "allow_any_host": true, 
00:09:01.535 "hosts": [], 00:09:01.535 "serial_number": "SPDK00000000000003", 00:09:01.535 "model_number": "SPDK bdev Controller", 00:09:01.535 "max_namespaces": 32, 00:09:01.535 "min_cntlid": 1, 00:09:01.535 "max_cntlid": 65519, 00:09:01.535 "namespaces": [ 00:09:01.535 { 00:09:01.535 "nsid": 1, 00:09:01.535 "bdev_name": "Null3", 00:09:01.535 "name": "Null3", 00:09:01.535 "nguid": "AB8A9DDA027140B7900441E1ADADC2CB", 00:09:01.535 "uuid": "ab8a9dda-0271-40b7-9004-41e1adadc2cb" 00:09:01.535 } 00:09:01.535 ] 00:09:01.535 }, 00:09:01.535 { 00:09:01.535 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:01.535 "subtype": "NVMe", 00:09:01.535 "listen_addresses": [ 00:09:01.535 { 00:09:01.535 "trtype": "TCP", 00:09:01.535 "adrfam": "IPv4", 00:09:01.535 "traddr": "10.0.0.2", 00:09:01.535 "trsvcid": "4420" 00:09:01.535 } 00:09:01.535 ], 00:09:01.535 "allow_any_host": true, 00:09:01.535 "hosts": [], 00:09:01.535 "serial_number": "SPDK00000000000004", 00:09:01.535 "model_number": "SPDK bdev Controller", 00:09:01.535 "max_namespaces": 32, 00:09:01.535 "min_cntlid": 1, 00:09:01.535 "max_cntlid": 65519, 00:09:01.535 "namespaces": [ 00:09:01.535 { 00:09:01.535 "nsid": 1, 00:09:01.535 "bdev_name": "Null4", 00:09:01.535 "name": "Null4", 00:09:01.535 "nguid": "41256E1046EF4C5A8BC618DECE1EB59D", 00:09:01.535 "uuid": "41256e10-46ef-4c5a-8bc6-18dece1eb59d" 00:09:01.535 } 00:09:01.535 ] 00:09:01.535 } 00:09:01.535 ] 00:09:01.535 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.535 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:01.535 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:01.535 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.535 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.535 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.535 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.535 14:11:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:01.535 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.535 14:11:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.535 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.535 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:01.535 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:01.535 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.535 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.535 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.535 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:01.535 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.535 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.793 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:01.793 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:01.793 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:01.793 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.793 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.793 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.793 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:01.793 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.793 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.793 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.793 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:01.793 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:01.793 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.794 rmmod nvme_tcp 00:09:01.794 rmmod nvme_fabrics 00:09:01.794 rmmod nvme_keyring 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1278468 ']' 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1278468 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1278468 ']' 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1278468 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1278468 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1278468' 00:09:01.794 killing process with pid 1278468 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1278468 00:09:01.794 14:11:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1278468 00:09:03.261 14:11:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:03.261 14:11:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:03.261 14:11:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:03.261 14:11:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:03.261 14:11:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:03.261 14:11:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.261 14:11:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.261 14:11:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.165 14:11:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:05.165 00:09:05.165 real 0m6.950s 00:09:05.165 user 0m8.770s 00:09:05.165 sys 0m1.892s 00:09:05.165 14:11:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.165 14:11:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.165 ************************************ 00:09:05.165 END TEST nvmf_target_discovery 00:09:05.165 ************************************ 00:09:05.165 14:11:14 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:09:05.166 14:11:14 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:05.166 14:11:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:05.166 14:11:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.166 14:11:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:05.166 ************************************ 00:09:05.166 START TEST nvmf_referrals 00:09:05.166 ************************************ 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:05.166 * Looking for test storage... 00:09:05.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:05.166 14:11:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.701 14:11:16 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:07.701 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:07.701 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:07.701 14:11:16 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:07.701 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.701 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:07.702 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.702 14:11:16 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:07.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:09:07.702 00:09:07.702 --- 10.0.0.2 ping statistics --- 00:09:07.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.702 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:07.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:07.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:09:07.702 00:09:07.702 --- 10.0.0.1 ping statistics --- 00:09:07.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.702 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1280701 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1280701 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1280701 ']' 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:07.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:07.702 14:11:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.702 [2024-07-10 14:11:16.913260] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:09:07.702 [2024-07-10 14:11:16.913392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.702 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.702 [2024-07-10 14:11:17.047399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.961 [2024-07-10 14:11:17.308787] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.961 [2024-07-10 14:11:17.308860] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.961 [2024-07-10 14:11:17.308899] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.961 [2024-07-10 14:11:17.308920] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.961 [2024-07-10 14:11:17.308941] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.961 [2024-07-10 14:11:17.309081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.961 [2024-07-10 14:11:17.309155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.961 [2024-07-10 14:11:17.309240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.961 [2024-07-10 14:11:17.309248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 [2024-07-10 14:11:17.894933] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 [2024-07-10 14:11:17.908187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:08.528 14:11:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:08.786 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:09.044 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:09.302 14:11:18 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:09.302 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:09.302 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:09.302 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:09.302 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:09.302 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:09.302 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:09.302 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:09.302 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.302 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:09.560 14:11:18 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:09.560 14:11:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:09.819 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:10.077 
14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:10.077 rmmod nvme_tcp 00:09:10.077 rmmod nvme_fabrics 00:09:10.077 rmmod nvme_keyring 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1280701 ']' 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1280701 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1280701 ']' 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1280701 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1280701 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1280701' 00:09:10.077 killing process with pid 1280701 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1280701 00:09:10.077 14:11:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1280701 00:09:11.450 14:11:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:11.450 14:11:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:11.450 14:11:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:11.450 14:11:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.450 14:11:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.450 14:11:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.450 14:11:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.450 14:11:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.363 14:11:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:13.363 00:09:13.363 real 0m8.249s 00:09:13.363 user 0m13.821s 00:09:13.363 sys 0m2.330s 00:09:13.363 14:11:22 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.363 14:11:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:13.363 ************************************ 00:09:13.363 END TEST nvmf_referrals 00:09:13.363 ************************************ 00:09:13.363 14:11:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:13.363 14:11:22 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:13.363 14:11:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:13.363 14:11:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.363 14:11:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:13.363 ************************************ 00:09:13.363 START TEST nvmf_connect_disconnect 00:09:13.363 ************************************ 00:09:13.363 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:13.621 * Looking for test storage... 00:09:13.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.621 14:11:22 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.621 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:13.622 14:11:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:15.524 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:15.524 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:15.524 14:11:24 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:15.524 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:15.524 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.524 14:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.524 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:15.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:09:15.783 00:09:15.783 --- 10.0.0.2 ping statistics --- 00:09:15.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.783 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:15.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:09:15.783 00:09:15.783 --- 10.0.0.1 ping statistics --- 00:09:15.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.783 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1283139 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1283139 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1283139 ']' 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.783 14:11:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:15.783 [2024-07-10 14:11:25.135221] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:09:15.783 [2024-07-10 14:11:25.135362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.783 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.042 [2024-07-10 14:11:25.282134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.300 [2024-07-10 14:11:25.550276] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.300 [2024-07-10 14:11:25.550351] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.300 [2024-07-10 14:11:25.550379] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.300 [2024-07-10 14:11:25.550401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.300 [2024-07-10 14:11:25.550441] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.300 [2024-07-10 14:11:25.550560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.300 [2024-07-10 14:11:25.550620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.300 [2024-07-10 14:11:25.550666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.300 [2024-07-10 14:11:25.550678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:16.866 [2024-07-10 14:11:26.111761] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:16.866 14:11:26 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.866 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:16.867 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.867 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:16.867 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.867 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.867 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.867 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:16.867 [2024-07-10 14:11:26.218217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.867 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.867 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:16.867 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:16.867 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:16.867 14:11:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:19.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.570 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:07.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:09.715 rmmod nvme_tcp 00:13:09.715 rmmod nvme_fabrics 00:13:09.715 rmmod nvme_keyring 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1283139 ']' 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1283139 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 
1283139 ']' 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1283139 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1283139 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1283139' 00:13:09.715 killing process with pid 1283139 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1283139 00:13:09.715 14:15:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1283139 00:13:11.090 14:15:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:11.090 14:15:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:11.090 14:15:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:11.090 14:15:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.090 14:15:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:11.090 14:15:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.090 14:15:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.090 14:15:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.992 14:15:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:12.992 00:13:12.992 real 3m59.362s 00:13:12.992 user 15m1.942s 00:13:12.992 sys 0m40.189s 00:13:12.992 14:15:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:12.992 14:15:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:12.992 ************************************ 00:13:12.992 END TEST nvmf_connect_disconnect 00:13:12.992 ************************************ 00:13:12.992 14:15:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:12.992 14:15:22 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:12.992 14:15:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:12.992 14:15:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.992 14:15:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:12.992 ************************************ 00:13:12.992 START TEST nvmf_multitarget 00:13:12.992 ************************************ 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:12.992 * Looking for test storage... 
00:13:12.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:12.992 14:15:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:14.894 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:14.894 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:14.894 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:14.894 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:14.894 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:14.895 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:15.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:15.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:13:15.154 00:13:15.154 --- 10.0.0.2 ping statistics --- 00:13:15.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.154 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:13:15.154 00:13:15.154 --- 10.0.0.1 ping statistics --- 00:13:15.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.154 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1315173 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1315173 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1315173 ']' 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.154 14:15:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:15.154 [2024-07-10 14:15:24.537090] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:13:15.154 [2024-07-10 14:15:24.537254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.154 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.413 [2024-07-10 14:15:24.680540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.671 [2024-07-10 14:15:24.954944] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.671 [2024-07-10 14:15:24.955020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.671 [2024-07-10 14:15:24.955048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.671 [2024-07-10 14:15:24.955069] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.671 [2024-07-10 14:15:24.955092] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.671 [2024-07-10 14:15:24.955219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.671 [2024-07-10 14:15:24.955295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.671 [2024-07-10 14:15:24.955334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.671 [2024-07-10 14:15:24.955345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.236 14:15:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.236 14:15:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:13:16.236 14:15:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:16.236 14:15:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:16.236 14:15:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:16.236 14:15:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.236 14:15:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:16.236 14:15:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:16.236 14:15:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:16.236 14:15:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:16.236 14:15:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:16.493 "nvmf_tgt_1" 00:13:16.493 14:15:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:16.493 "nvmf_tgt_2" 00:13:16.493 14:15:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:16.493 14:15:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:16.750 14:15:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:13:16.750 14:15:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:16.750 true 00:13:16.750 14:15:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:16.750 true 00:13:16.750 14:15:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:16.750 14:15:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:17.007 rmmod nvme_tcp 00:13:17.007 rmmod nvme_fabrics 00:13:17.007 rmmod nvme_keyring 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1315173 ']' 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1315173 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1315173 ']' 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1315173 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:13:17.007 14:15:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.008 14:15:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1315173 00:13:17.008 14:15:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:17.008 14:15:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:17.008 14:15:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1315173' 00:13:17.008 killing process with pid 1315173 00:13:17.008 14:15:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1315173 00:13:17.008 14:15:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1315173 00:13:18.380 14:15:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:18.380 14:15:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:18.380 14:15:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:18.380 14:15:27 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.380 14:15:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:18.380 14:15:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.380 14:15:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.380 14:15:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.280 14:15:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:20.280 00:13:20.280 real 0m7.476s 00:13:20.280 user 0m11.515s 00:13:20.280 sys 0m2.072s 00:13:20.280 14:15:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:20.280 14:15:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:20.280 ************************************ 00:13:20.280 END TEST nvmf_multitarget 00:13:20.280 ************************************ 00:13:20.280 14:15:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:20.281 14:15:29 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:20.281 14:15:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:20.281 14:15:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.281 14:15:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:20.539 ************************************ 00:13:20.539 START TEST nvmf_rpc 00:13:20.539 ************************************ 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:20.539 * Looking for test storage... 
00:13:20.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.539 14:15:29 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:20.540 14:15:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:22.439 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:22.439 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:22.439 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:22.439 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:22.439 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.440 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.440 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.440 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.440 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:22.440 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:22.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:13:22.697 00:13:22.697 --- 10.0.0.2 ping statistics --- 00:13:22.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.697 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:13:22.697 00:13:22.697 --- 10.0.0.1 ping statistics --- 00:13:22.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.697 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1317522 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1317522 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1317522 ']' 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.697 14:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.697 [2024-07-10 14:15:32.081982] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:13:22.697 [2024-07-10 14:15:32.082141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.697 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.954 [2024-07-10 14:15:32.226734] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.212 [2024-07-10 14:15:32.492891] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.212 [2024-07-10 14:15:32.492967] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:23.212 [2024-07-10 14:15:32.492996] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.212 [2024-07-10 14:15:32.493016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.212 [2024-07-10 14:15:32.493038] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.212 [2024-07-10 14:15:32.493158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.212 [2024-07-10 14:15:32.493213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.212 [2024-07-10 14:15:32.493260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.212 [2024-07-10 14:15:32.493272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.776 14:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.776 14:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:23.776 14:15:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:23.776 14:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:23.776 14:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.776 14:15:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.776 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:23.776 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.776 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.776 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.776 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:23.776 "tick_rate": 2700000000, 00:13:23.776 "poll_groups": [ 00:13:23.776 { 00:13:23.776 "name": "nvmf_tgt_poll_group_000", 00:13:23.776 "admin_qpairs": 0, 00:13:23.776 "io_qpairs": 0, 00:13:23.776 "current_admin_qpairs": 0, 00:13:23.776 "current_io_qpairs": 0, 00:13:23.776 "pending_bdev_io": 0, 00:13:23.776 "completed_nvme_io": 0, 00:13:23.776 "transports": [] 00:13:23.776 }, 00:13:23.776 { 00:13:23.776 "name": "nvmf_tgt_poll_group_001", 00:13:23.776 "admin_qpairs": 0, 00:13:23.776 "io_qpairs": 0, 00:13:23.776 "current_admin_qpairs": 0, 00:13:23.776 "current_io_qpairs": 0, 00:13:23.776 "pending_bdev_io": 0, 00:13:23.776 "completed_nvme_io": 0, 00:13:23.776 "transports": [] 00:13:23.776 }, 00:13:23.776 { 00:13:23.776 "name": "nvmf_tgt_poll_group_002", 00:13:23.776 "admin_qpairs": 0, 00:13:23.776 "io_qpairs": 0, 00:13:23.776 "current_admin_qpairs": 0, 00:13:23.776 "current_io_qpairs": 0, 00:13:23.776 "pending_bdev_io": 0, 00:13:23.776 "completed_nvme_io": 0, 00:13:23.776 "transports": [] 00:13:23.776 }, 00:13:23.776 { 00:13:23.776 "name": "nvmf_tgt_poll_group_003", 00:13:23.776 "admin_qpairs": 0, 00:13:23.776 "io_qpairs": 0, 00:13:23.776 "current_admin_qpairs": 0, 00:13:23.776 "current_io_qpairs": 0, 00:13:23.776 "pending_bdev_io": 0, 00:13:23.776 "completed_nvme_io": 0, 00:13:23.776 "transports": [] 00:13:23.776 } 00:13:23.776 ] 00:13:23.776 }' 00:13:23.776 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:23.776 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:23.776 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:23.776 14:15:33 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.777 [2024-07-10 14:15:33.106816] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:23.777 "tick_rate": 2700000000, 00:13:23.777 "poll_groups": [ 00:13:23.777 { 00:13:23.777 "name": "nvmf_tgt_poll_group_000", 00:13:23.777 "admin_qpairs": 0, 00:13:23.777 "io_qpairs": 0, 00:13:23.777 "current_admin_qpairs": 0, 00:13:23.777 "current_io_qpairs": 0, 00:13:23.777 "pending_bdev_io": 0, 00:13:23.777 "completed_nvme_io": 0, 00:13:23.777 "transports": [ 00:13:23.777 { 00:13:23.777 "trtype": "TCP" 00:13:23.777 } 00:13:23.777 ] 00:13:23.777 }, 00:13:23.777 { 00:13:23.777 "name": "nvmf_tgt_poll_group_001", 00:13:23.777 "admin_qpairs": 0, 00:13:23.777 "io_qpairs": 0, 00:13:23.777 "current_admin_qpairs": 0, 00:13:23.777 "current_io_qpairs": 0, 00:13:23.777 "pending_bdev_io": 0, 00:13:23.777 "completed_nvme_io": 0, 00:13:23.777 "transports": [ 00:13:23.777 { 00:13:23.777 "trtype": "TCP" 00:13:23.777 } 00:13:23.777 ] 00:13:23.777 }, 00:13:23.777 { 00:13:23.777 "name": "nvmf_tgt_poll_group_002", 00:13:23.777 "admin_qpairs": 0, 00:13:23.777 "io_qpairs": 0, 00:13:23.777 "current_admin_qpairs": 0, 00:13:23.777 "current_io_qpairs": 0, 00:13:23.777 "pending_bdev_io": 0, 00:13:23.777 "completed_nvme_io": 0, 00:13:23.777 "transports": [ 00:13:23.777 { 00:13:23.777 "trtype": "TCP" 00:13:23.777 } 00:13:23.777 ] 00:13:23.777 }, 00:13:23.777 { 00:13:23.777 "name": "nvmf_tgt_poll_group_003", 00:13:23.777 "admin_qpairs": 0, 00:13:23.777 "io_qpairs": 0, 00:13:23.777 "current_admin_qpairs": 0, 00:13:23.777 "current_io_qpairs": 0, 00:13:23.777 "pending_bdev_io": 0, 00:13:23.777 "completed_nvme_io": 0, 00:13:23.777 "transports": [ 00:13:23.777 { 00:13:23.777 "trtype": "TCP" 00:13:23.777 } 00:13:23.777 ] 00:13:23.777 } 00:13:23.777 ] 00:13:23.777 }' 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.777 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.035 Malloc1 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.035 [2024-07-10 14:15:33.313694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:24.035 [2024-07-10 14:15:33.336869] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:24.035 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:24.035 could not add new controller: failed to write to nvme-fabrics device 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.035 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.600 14:15:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.600 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:24.600 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.600 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:24.600 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.124 14:15:36 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.124 [2024-07-10 14:15:36.286123] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:27.124 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:27.124 could not add new controller: failed to write to nvme-fabrics device 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.124 14:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.688 14:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.688 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.688 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.688 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:27.688 14:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:29.584 14:15:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:29.584 14:15:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:29.584 14:15:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:29.584 14:15:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:29.584 14:15:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.584 14:15:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:29.584 14:15:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:29.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:29.841 14:15:39 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.841 [2024-07-10 14:15:39.147811] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.841 14:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.405 14:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:30.405 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:30.405 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.405 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:30.405 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:32.932 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:32.932 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:32.932 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.932 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:32.932 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.932 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:32.932 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.932 [2024-07-10 14:15:42.070009] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.932 14:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.498 14:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:33.498 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:13:33.498 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.498 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:33.498 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:35.396 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:35.396 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:35.396 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.396 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:35.396 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.396 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:35.396 14:15:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.654 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.654 14:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.654 14:15:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.654 14:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.654 14:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.654 [2024-07-10 14:15:45.004601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:13:35.654 14:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.654 14:15:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:35.654 14:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.654 14:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.654 14:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.654 14:15:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:35.654 14:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.654 14:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.654 14:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.654 14:15:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:36.220 14:15:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:36.220 14:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:36.220 14:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.220 14:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:36.220 14:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.747 [2024-07-10 14:15:47.817569] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.747 14:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:39.314 14:15:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:39.314 14:15:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:39.314 14:15:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:39.314 14:15:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:39.314 14:15:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:41.300 
14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.300 [2024-07-10 14:15:50.743998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.300 14:15:50 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.300 14:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.234 14:15:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.234 14:15:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:42.234 14:15:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.234 14:15:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:42.234 14:15:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:44.131 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:44.131 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:44.131 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.131 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:44.131 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.131 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:44.131 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.131 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.131 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:44.131 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.132 [2024-07-10 14:15:53.548839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.132 [2024-07-10 14:15:53.596833] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.132 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 [2024-07-10 14:15:53.645012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 [2024-07-10 14:15:53.693145] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 [2024-07-10 14:15:53.741330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:44.391 "tick_rate": 2700000000, 00:13:44.391 "poll_groups": [ 00:13:44.391 { 00:13:44.391 "name": "nvmf_tgt_poll_group_000", 00:13:44.391 "admin_qpairs": 2, 00:13:44.391 "io_qpairs": 84, 00:13:44.391 "current_admin_qpairs": 0, 00:13:44.391 "current_io_qpairs": 0, 00:13:44.391 "pending_bdev_io": 0, 00:13:44.391 "completed_nvme_io": 157, 00:13:44.391 "transports": [ 00:13:44.391 { 00:13:44.391 "trtype": "TCP" 00:13:44.391 } 00:13:44.391 ] 00:13:44.391 }, 00:13:44.391 { 00:13:44.391 "name": "nvmf_tgt_poll_group_001", 00:13:44.391 "admin_qpairs": 2, 00:13:44.391 "io_qpairs": 84, 00:13:44.391 "current_admin_qpairs": 0, 00:13:44.391 "current_io_qpairs": 0, 00:13:44.391 "pending_bdev_io": 0, 00:13:44.391 "completed_nvme_io": 174, 00:13:44.391 "transports": [ 00:13:44.391 { 00:13:44.391 "trtype": "TCP" 00:13:44.391 } 00:13:44.391 ] 00:13:44.391 }, 00:13:44.391 { 00:13:44.391 
"name": "nvmf_tgt_poll_group_002", 00:13:44.391 "admin_qpairs": 1, 00:13:44.391 "io_qpairs": 84, 00:13:44.391 "current_admin_qpairs": 0, 00:13:44.391 "current_io_qpairs": 0, 00:13:44.391 "pending_bdev_io": 0, 00:13:44.391 "completed_nvme_io": 182, 00:13:44.391 "transports": [ 00:13:44.391 { 00:13:44.391 "trtype": "TCP" 00:13:44.391 } 00:13:44.391 ] 00:13:44.391 }, 00:13:44.391 { 00:13:44.391 "name": "nvmf_tgt_poll_group_003", 00:13:44.391 "admin_qpairs": 2, 00:13:44.391 "io_qpairs": 84, 00:13:44.391 "current_admin_qpairs": 0, 00:13:44.391 "current_io_qpairs": 0, 00:13:44.391 "pending_bdev_io": 0, 00:13:44.391 "completed_nvme_io": 173, 00:13:44.391 "transports": [ 00:13:44.391 { 00:13:44.391 "trtype": "TCP" 00:13:44.391 } 00:13:44.391 ] 00:13:44.391 } 00:13:44.391 ] 00:13:44.391 }' 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:44.391 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:44.649 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:44.649 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:44.649 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:44.649 14:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:44.649 14:15:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:44.649 14:15:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:44.649 14:15:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.649 14:15:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:44.649 14:15:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.649 14:15:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.649 rmmod nvme_tcp 00:13:44.649 rmmod nvme_fabrics 00:13:44.649 rmmod nvme_keyring 00:13:44.649 14:15:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.650 14:15:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:44.650 14:15:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:44.650 14:15:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1317522 ']' 00:13:44.650 14:15:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1317522 00:13:44.650 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1317522 ']' 00:13:44.650 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1317522 00:13:44.650 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:13:44.650 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:44.650 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1317522 00:13:44.650 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:13:44.650 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:44.650 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1317522' 00:13:44.650 killing process with pid 1317522 00:13:44.650 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1317522 00:13:44.650 14:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1317522 00:13:46.025 14:15:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:46.025 14:15:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:46.025 14:15:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:46.025 14:15:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.025 14:15:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:46.025 14:15:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.025 14:15:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.025 14:15:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.557 14:15:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:48.557 00:13:48.557 real 0m27.717s 00:13:48.557 user 1m29.059s 00:13:48.557 sys 0m4.450s 00:13:48.557 14:15:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:48.557 14:15:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.557 ************************************ 00:13:48.557 END TEST nvmf_rpc 00:13:48.557 ************************************ 00:13:48.557 14:15:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:48.557 14:15:57 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:48.557 14:15:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:48.557 14:15:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.557 14:15:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:48.557 ************************************ 00:13:48.557 START TEST nvmf_invalid 00:13:48.557 ************************************ 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:48.557 * Looking for test storage... 
00:13:48.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:48.557 14:15:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:50.459 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.459 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:50.460 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:50.460 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:50.460 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:50.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:50.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:13:50.460 00:13:50.460 --- 10.0.0.2 ping statistics --- 00:13:50.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.460 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:13:50.460 00:13:50.460 --- 10.0.0.1 ping statistics --- 00:13:50.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.460 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1322286 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1322286 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1322286 ']' 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.460 14:15:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:50.460 [2024-07-10 14:15:59.879622] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:13:50.460 [2024-07-10 14:15:59.879779] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.719 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.719 [2024-07-10 14:16:00.024668] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:50.998 [2024-07-10 14:16:00.294543] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.999 [2024-07-10 14:16:00.294623] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.999 [2024-07-10 14:16:00.294651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.999 [2024-07-10 14:16:00.294672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.999 [2024-07-10 14:16:00.294694] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.999 [2024-07-10 14:16:00.294833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.999 [2024-07-10 14:16:00.294894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.999 [2024-07-10 14:16:00.294951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.999 [2024-07-10 14:16:00.294963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.565 14:16:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.565 14:16:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:13:51.565 14:16:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:51.565 14:16:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:51.565 14:16:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:51.565 14:16:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.565 14:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:51.565 14:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25784 00:13:51.565 [2024-07-10 14:16:01.021217] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:51.565 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:51.565 { 00:13:51.565 "nqn": "nqn.2016-06.io.spdk:cnode25784", 00:13:51.565 "tgt_name": "foobar", 00:13:51.565 "method": "nvmf_create_subsystem", 00:13:51.565 "req_id": 1 00:13:51.565 } 00:13:51.565 Got JSON-RPC error response 00:13:51.565 response: 00:13:51.565 { 00:13:51.565 "code": -32603, 00:13:51.565 "message": "Unable to find target foobar" 00:13:51.565 }' 00:13:51.565 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:51.565 { 00:13:51.565 "nqn": "nqn.2016-06.io.spdk:cnode25784", 00:13:51.565 "tgt_name": "foobar", 00:13:51.565 "method": "nvmf_create_subsystem", 00:13:51.565 "req_id": 1 00:13:51.565 } 00:13:51.565 Got JSON-RPC error response 00:13:51.565 response: 00:13:51.565 { 00:13:51.565 "code": -32603, 00:13:51.565 "message": "Unable to find target foobar" 
00:13:51.565 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:51.565 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:51.565 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8737 00:13:51.822 [2024-07-10 14:16:01.274145] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8737: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:51.822 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:51.822 { 00:13:51.822 "nqn": "nqn.2016-06.io.spdk:cnode8737", 00:13:51.822 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:51.822 "method": "nvmf_create_subsystem", 00:13:51.822 "req_id": 1 00:13:51.822 } 00:13:51.822 Got JSON-RPC error response 00:13:51.822 response: 00:13:51.822 { 00:13:51.822 "code": -32602, 00:13:51.822 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:51.822 }' 00:13:51.822 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:51.822 { 00:13:51.822 "nqn": "nqn.2016-06.io.spdk:cnode8737", 00:13:51.822 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:51.822 "method": "nvmf_create_subsystem", 00:13:51.822 "req_id": 1 00:13:51.822 } 00:13:51.822 Got JSON-RPC error response 00:13:51.822 response: 00:13:51.822 { 00:13:51.822 "code": -32602, 00:13:51.822 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:51.822 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:51.822 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:51.822 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23763 00:13:52.080 [2024-07-10 14:16:01.522906] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23763: invalid model number 'SPDK_Controller' 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:52.080 { 00:13:52.080 "nqn": "nqn.2016-06.io.spdk:cnode23763", 00:13:52.080 "model_number": "SPDK_Controller\u001f", 00:13:52.080 "method": "nvmf_create_subsystem", 00:13:52.080 "req_id": 1 00:13:52.080 } 00:13:52.080 Got JSON-RPC error response 00:13:52.080 response: 00:13:52.080 { 00:13:52.080 "code": -32602, 00:13:52.080 "message": "Invalid MN SPDK_Controller\u001f" 00:13:52.080 }' 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:52.080 { 00:13:52.080 "nqn": "nqn.2016-06.io.spdk:cnode23763", 00:13:52.080 "model_number": "SPDK_Controller\u001f", 00:13:52.080 "method": "nvmf_create_subsystem", 00:13:52.080 "req_id": 1 00:13:52.080 } 00:13:52.080 Got JSON-RPC error response 00:13:52.080 response: 00:13:52.080 { 00:13:52.080 "code": -32602, 00:13:52.080 "message": "Invalid MN SPDK_Controller\u001f" 00:13:52.080 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.080 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.338 14:16:01 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:52.338 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.339 14:16:01 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 4 == \- ]] 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '4bFD+]%!/5/9TfJ-1U%(S' 00:13:52.339 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '4bFD+]%!/5/9TfJ-1U%(S' nqn.2016-06.io.spdk:cnode4119 00:13:52.597 [2024-07-10 14:16:01.860143] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4119: invalid serial number '4bFD+]%!/5/9TfJ-1U%(S' 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:52.597 { 00:13:52.597 "nqn": "nqn.2016-06.io.spdk:cnode4119", 00:13:52.597 "serial_number": "4bFD+]%!/5/9TfJ-1U%(S", 00:13:52.597 "method": "nvmf_create_subsystem", 00:13:52.597 "req_id": 1 00:13:52.597 } 00:13:52.597 Got JSON-RPC error response 00:13:52.597 response: 00:13:52.597 { 00:13:52.597 
"code": -32602, 00:13:52.597 "message": "Invalid SN 4bFD+]%!/5/9TfJ-1U%(S" 00:13:52.597 }' 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:52.597 { 00:13:52.597 "nqn": "nqn.2016-06.io.spdk:cnode4119", 00:13:52.597 "serial_number": "4bFD+]%!/5/9TfJ-1U%(S", 00:13:52.597 "method": "nvmf_create_subsystem", 00:13:52.597 "req_id": 1 00:13:52.597 } 00:13:52.597 Got JSON-RPC error response 00:13:52.597 response: 00:13:52.597 { 00:13:52.597 "code": -32602, 00:13:52.597 "message": "Invalid SN 4bFD+]%!/5/9TfJ-1U%(S" 00:13:52.597 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:52.597 14:16:01 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.597 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:52.598 
14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:52.598 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
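The gen_random_s loops traced through this stretch (21 characters for the serial-number probe, 41 for the model-number probe still being assembled here) reduce to: pick a random ASCII code between 32 and 127, render it with printf %x plus echo -e, and append it. A hypothetical re-creation of the generator, not the script verbatim:

gen_random_s() {
    local length=$1 ll string=
    local chars=({32..127})                      # ASCII codes 32-127, the same table listed in the trace
    for ((ll = 0; ll < length; ll++)); do
        string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
    done
    # the traced script also checks the first character against '-' (invalid.sh line 28)
    # so the result cannot be mistaken for an rpc.py option
    echo "$string"
}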
00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 6 == \- ]] 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '6_ofVhr#_"1iuV24pUQ?y'\'';wBLun'\''C~|y?QEn5+L' 00:13:52.599 14:16:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '6_ofVhr#_"1iuV24pUQ?y'\'';wBLun'\''C~|y?QEn5+L' nqn.2016-06.io.spdk:cnode9380 00:13:52.856 [2024-07-10 14:16:02.217366] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9380: invalid model number '6_ofVhr#_"1iuV24pUQ?y';wBLun'C~|y?QEn5+L' 00:13:52.856 14:16:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:52.856 { 00:13:52.856 "nqn": 
"nqn.2016-06.io.spdk:cnode9380", 00:13:52.856 "model_number": "6_of\u007fVhr#_\"1iuV24pUQ?y'\'';wBLun'\''C~|y?QEn5+L", 00:13:52.856 "method": "nvmf_create_subsystem", 00:13:52.856 "req_id": 1 00:13:52.856 } 00:13:52.856 Got JSON-RPC error response 00:13:52.856 response: 00:13:52.856 { 00:13:52.856 "code": -32602, 00:13:52.856 "message": "Invalid MN 6_of\u007fVhr#_\"1iuV24pUQ?y'\'';wBLun'\''C~|y?QEn5+L" 00:13:52.856 }' 00:13:52.856 14:16:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:52.856 { 00:13:52.856 "nqn": "nqn.2016-06.io.spdk:cnode9380", 00:13:52.856 "model_number": "6_of\u007fVhr#_\"1iuV24pUQ?y';wBLun'C~|y?QEn5+L", 00:13:52.856 "method": "nvmf_create_subsystem", 00:13:52.856 "req_id": 1 00:13:52.856 } 00:13:52.856 Got JSON-RPC error response 00:13:52.856 response: 00:13:52.856 { 00:13:52.856 "code": -32602, 00:13:52.856 "message": "Invalid MN 6_of\u007fVhr#_\"1iuV24pUQ?y';wBLun'C~|y?QEn5+L" 00:13:52.856 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:52.856 14:16:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:53.114 [2024-07-10 14:16:02.470317] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.114 14:16:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:53.372 14:16:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:53.372 14:16:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:53.372 14:16:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:53.372 14:16:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:53.372 14:16:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:53.630 [2024-07-10 14:16:02.985464] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:53.630 14:16:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:53.630 { 00:13:53.630 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:53.630 "listen_address": { 00:13:53.630 "trtype": "tcp", 00:13:53.630 "traddr": "", 00:13:53.630 "trsvcid": "4421" 00:13:53.630 }, 00:13:53.630 "method": "nvmf_subsystem_remove_listener", 00:13:53.630 "req_id": 1 00:13:53.630 } 00:13:53.630 Got JSON-RPC error response 00:13:53.630 response: 00:13:53.630 { 00:13:53.630 "code": -32602, 00:13:53.630 "message": "Invalid parameters" 00:13:53.630 }' 00:13:53.630 14:16:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:53.630 { 00:13:53.630 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:53.630 "listen_address": { 00:13:53.630 "trtype": "tcp", 00:13:53.630 "traddr": "", 00:13:53.630 "trsvcid": "4421" 00:13:53.630 }, 00:13:53.630 "method": "nvmf_subsystem_remove_listener", 00:13:53.630 "req_id": 1 00:13:53.630 } 00:13:53.630 Got JSON-RPC error response 00:13:53.630 response: 00:13:53.630 { 00:13:53.630 "code": -32602, 00:13:53.630 "message": "Invalid parameters" 00:13:53.630 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:53.630 14:16:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5303 -i 0 00:13:53.887 [2024-07-10 14:16:03.226232] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5303: invalid cntlid range [0-65519] 00:13:53.887 14:16:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:53.887 { 00:13:53.887 "nqn": "nqn.2016-06.io.spdk:cnode5303", 00:13:53.887 "min_cntlid": 0, 00:13:53.887 "method": "nvmf_create_subsystem", 00:13:53.887 "req_id": 1 00:13:53.887 } 00:13:53.887 Got JSON-RPC error response 00:13:53.887 response: 00:13:53.887 { 00:13:53.887 "code": -32602, 00:13:53.887 "message": "Invalid cntlid range [0-65519]" 00:13:53.887 }' 00:13:53.887 14:16:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:53.887 { 00:13:53.887 "nqn": "nqn.2016-06.io.spdk:cnode5303", 00:13:53.887 "min_cntlid": 0, 00:13:53.887 "method": "nvmf_create_subsystem", 00:13:53.887 "req_id": 1 00:13:53.887 } 00:13:53.887 Got JSON-RPC error response 00:13:53.887 response: 00:13:53.887 { 00:13:53.887 "code": -32602, 00:13:53.887 "message": "Invalid cntlid range [0-65519]" 00:13:53.887 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:53.887 14:16:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19726 -i 65520 00:13:54.144 [2024-07-10 14:16:03.487111] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19726: invalid cntlid range [65520-65519] 00:13:54.144 14:16:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:54.144 { 00:13:54.144 "nqn": "nqn.2016-06.io.spdk:cnode19726", 00:13:54.144 "min_cntlid": 65520, 00:13:54.144 "method": "nvmf_create_subsystem", 00:13:54.144 "req_id": 1 00:13:54.144 } 00:13:54.144 Got JSON-RPC error response 00:13:54.144 response: 00:13:54.144 { 00:13:54.144 "code": -32602, 00:13:54.144 "message": "Invalid cntlid range [65520-65519]" 00:13:54.144 }' 00:13:54.144 14:16:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:54.144 { 00:13:54.144 "nqn": "nqn.2016-06.io.spdk:cnode19726", 00:13:54.144 "min_cntlid": 65520, 00:13:54.144 "method": "nvmf_create_subsystem", 00:13:54.144 "req_id": 1 00:13:54.144 } 00:13:54.144 Got JSON-RPC error response 00:13:54.144 response: 00:13:54.144 { 00:13:54.144 "code": -32602, 00:13:54.144 "message": "Invalid cntlid range [65520-65519]" 00:13:54.144 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:54.144 14:16:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10607 -I 0 00:13:54.402 [2024-07-10 14:16:03.735913] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10607: invalid cntlid range [1-0] 00:13:54.402 14:16:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:54.402 { 00:13:54.402 "nqn": "nqn.2016-06.io.spdk:cnode10607", 00:13:54.402 "max_cntlid": 0, 00:13:54.402 "method": "nvmf_create_subsystem", 00:13:54.402 "req_id": 1 00:13:54.402 } 00:13:54.402 Got JSON-RPC error response 00:13:54.402 response: 00:13:54.402 { 00:13:54.402 "code": -32602, 00:13:54.402 "message": "Invalid cntlid range [1-0]" 00:13:54.402 }' 00:13:54.402 14:16:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:54.402 { 00:13:54.402 "nqn": "nqn.2016-06.io.spdk:cnode10607", 00:13:54.402 "max_cntlid": 0, 00:13:54.402 "method": "nvmf_create_subsystem", 00:13:54.402 "req_id": 1 00:13:54.402 } 00:13:54.402 Got JSON-RPC error response 00:13:54.402 response: 
00:13:54.402 { 00:13:54.402 "code": -32602, 00:13:54.402 "message": "Invalid cntlid range [1-0]" 00:13:54.402 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:54.402 14:16:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13901 -I 65520 00:13:54.660 [2024-07-10 14:16:03.984834] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13901: invalid cntlid range [1-65520] 00:13:54.660 14:16:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:54.660 { 00:13:54.660 "nqn": "nqn.2016-06.io.spdk:cnode13901", 00:13:54.660 "max_cntlid": 65520, 00:13:54.660 "method": "nvmf_create_subsystem", 00:13:54.660 "req_id": 1 00:13:54.660 } 00:13:54.660 Got JSON-RPC error response 00:13:54.660 response: 00:13:54.660 { 00:13:54.660 "code": -32602, 00:13:54.660 "message": "Invalid cntlid range [1-65520]" 00:13:54.660 }' 00:13:54.660 14:16:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:54.660 { 00:13:54.660 "nqn": "nqn.2016-06.io.spdk:cnode13901", 00:13:54.660 "max_cntlid": 65520, 00:13:54.660 "method": "nvmf_create_subsystem", 00:13:54.660 "req_id": 1 00:13:54.660 } 00:13:54.660 Got JSON-RPC error response 00:13:54.660 response: 00:13:54.660 { 00:13:54.660 "code": -32602, 00:13:54.660 "message": "Invalid cntlid range [1-65520]" 00:13:54.660 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:54.660 14:16:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6018 -i 6 -I 5 00:13:54.918 [2024-07-10 14:16:04.221645] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6018: invalid cntlid range [6-5] 00:13:54.918 14:16:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:54.918 { 00:13:54.918 "nqn": "nqn.2016-06.io.spdk:cnode6018", 00:13:54.918 "min_cntlid": 6, 00:13:54.918 "max_cntlid": 5, 00:13:54.918 "method": "nvmf_create_subsystem", 00:13:54.918 "req_id": 1 00:13:54.918 } 00:13:54.918 Got JSON-RPC error response 00:13:54.918 response: 00:13:54.918 { 00:13:54.918 "code": -32602, 00:13:54.918 "message": "Invalid cntlid range [6-5]" 00:13:54.918 }' 00:13:54.918 14:16:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:54.918 { 00:13:54.918 "nqn": "nqn.2016-06.io.spdk:cnode6018", 00:13:54.918 "min_cntlid": 6, 00:13:54.918 "max_cntlid": 5, 00:13:54.918 "method": "nvmf_create_subsystem", 00:13:54.918 "req_id": 1 00:13:54.918 } 00:13:54.918 Got JSON-RPC error response 00:13:54.918 response: 00:13:54.918 { 00:13:54.918 "code": -32602, 00:13:54.918 "message": "Invalid cntlid range [6-5]" 00:13:54.918 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:54.918 14:16:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:54.918 14:16:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:54.918 { 00:13:54.918 "name": "foobar", 00:13:54.918 "method": "nvmf_delete_target", 00:13:54.918 "req_id": 1 00:13:54.918 } 00:13:54.918 Got JSON-RPC error response 00:13:54.918 response: 00:13:54.918 { 00:13:54.918 "code": -32602, 00:13:54.918 "message": "The specified target doesn'\''t exist, cannot delete it." 
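Every cntlid probe above is the same recipe: pass -i (min_cntlid) or -I (max_cntlid) outside the 1-65519 window the target enforces, or an inverted pair, and require the "Invalid cntlid range" rejection; the backslash-heavy patterns in the trace are just those expected strings with every character glob-escaped for the [[ ... == *...* ]] match. Reproduced by hand against the running target:

out=$(./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5303 -i 0 2>&1) || true
[[ $out == *"Invalid cntlid range"* ]] && echo "min_cntlid 0 rejected as expected"

out=$(./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6018 -i 6 -I 5 2>&1) || true
[[ $out == *"Invalid cntlid range"* ]] && echo "inverted range rejected as expected"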
00:13:54.918 }' 00:13:54.918 14:16:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:54.918 { 00:13:54.918 "name": "foobar", 00:13:54.918 "method": "nvmf_delete_target", 00:13:54.918 "req_id": 1 00:13:54.918 } 00:13:54.918 Got JSON-RPC error response 00:13:54.918 response: 00:13:54.918 { 00:13:54.918 "code": -32602, 00:13:54.918 "message": "The specified target doesn't exist, cannot delete it." 00:13:54.918 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:54.918 14:16:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:54.918 14:16:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:54.918 14:16:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:54.918 14:16:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:54.918 14:16:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:54.918 14:16:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:54.918 14:16:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:54.918 14:16:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:54.918 rmmod nvme_tcp 00:13:54.918 rmmod nvme_fabrics 00:13:54.918 rmmod nvme_keyring 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1322286 ']' 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1322286 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1322286 ']' 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1322286 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1322286 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1322286' 00:13:55.176 killing process with pid 1322286 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1322286 00:13:55.176 14:16:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1322286 00:13:56.550 14:16:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:56.550 14:16:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:56.550 14:16:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:56.550 14:16:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:56.550 14:16:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:56.550 14:16:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.550 14:16:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
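The teardown that runs next mirrors the setup: disarm the trap, unload the host-side NVMe modules, stop the target process, then drop the namespace and flush the initiator address. Roughly, assuming remove_spdk_ns amounts to deleting the cvl_0_0_ns_spdk namespace:

trap - SIGINT SIGTERM EXIT                   # assertions are done, disarm the error handler
sudo modprobe -r nvme-tcp                    # unloads nvme_tcp, nvme_fabrics, nvme_keyring as logged above
sudo modprobe -r nvme-fabrics
sudo kill "$nvmfpid" && wait "$nvmfpid"      # stop the nvmf_tgt started earlier
sudo ip netns delete cvl_0_0_ns_spdk         # assumption: what remove_spdk_ns boils down to
sudo ip -4 addr flush cvl_0_1                # clear the initiator-side address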
00:13:56.550 14:16:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.452 14:16:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:58.452 00:13:58.452 real 0m10.194s 00:13:58.452 user 0m24.169s 00:13:58.452 sys 0m2.599s 00:13:58.452 14:16:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:58.452 14:16:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:58.452 ************************************ 00:13:58.452 END TEST nvmf_invalid 00:13:58.452 ************************************ 00:13:58.452 14:16:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:58.452 14:16:07 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:58.452 14:16:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:58.452 14:16:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.452 14:16:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:58.452 ************************************ 00:13:58.452 START TEST nvmf_abort 00:13:58.452 ************************************ 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:58.452 * Looking for test storage... 00:13:58.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.452 14:16:07 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:58.453 
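For the abort test, nvmf/common.sh is sourced afresh: it fixes the 4420/4421/4422 ports, generates a host NQN/ID pair with nvme gen-hostnqn, and names the test subsystem nqn.2016-06.io.spdk:testnqn. Those exports are what a later initiator-side attach is built from; shown here only as an illustration of how they compose, not a step this log has reached:

nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"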
14:16:07 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:58.453 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:00.351 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:00.352 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:00.352 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:00.352 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:00.352 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.352 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.611 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.611 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.611 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:00.611 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.611 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.611 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.611 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:00.612 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:14:00.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:14:00.612 00:14:00.612 --- 10.0.0.2 ping statistics --- 00:14:00.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.612 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:14:00.612 00:14:00.612 --- 10.0.0.1 ping statistics --- 00:14:00.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.612 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1325053 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1325053 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1325053 ']' 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.612 14:16:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:00.612 [2024-07-10 14:16:10.066826] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
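For anyone reproducing this outside the CI, the trace above boils down to a small netns-based point-to-point setup. The following is a condensed sketch of what nvmf_tcp_init in nvmf/common.sh executed in this run; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are the ones detected on this host's E810 ports and will differ on other machines.

  #!/usr/bin/env bash
  # Condensed sketch of the nvmf_tcp_init steps traced above (names from this run).
  set -euo pipefail
  TARGET_IF=cvl_0_0        # target-side port, moved into its own network namespace
  INITIATOR_IF=cvl_0_1     # initiator-side port, left in the default namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                          # initiator -> target, as in the log above
  ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator

Putting one port in a namespace lets the target and the initiator run on the same machine while the test traffic still leaves the host through a real NIC port, which is what the ping checks above confirm before nvmf_tgt is started.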
00:14:00.612 [2024-07-10 14:16:10.066983] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.870 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.870 [2024-07-10 14:16:10.211463] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:01.128 [2024-07-10 14:16:10.478228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.128 [2024-07-10 14:16:10.478312] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.128 [2024-07-10 14:16:10.478346] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.128 [2024-07-10 14:16:10.478367] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.128 [2024-07-10 14:16:10.478388] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.128 [2024-07-10 14:16:10.478510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.128 [2024-07-10 14:16:10.478557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.128 [2024-07-10 14:16:10.478568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.694 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.694 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:14:01.694 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:01.694 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:01.694 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:01.694 [2024-07-10 14:16:11.019204] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:01.694 Malloc0 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:01.694 Delay0 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:01.694 [2024-07-10 14:16:11.146542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.694 14:16:11 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:01.952 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.952 [2024-07-10 14:16:11.315097] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:04.494 Initializing NVMe Controllers 00:14:04.494 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:04.494 controller IO queue size 128 less than required 00:14:04.494 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:04.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:04.494 Initialization complete. Launching workers. 
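Before the abort totals below, it may help to see the target-side configuration in one place. This is a condensed sketch of the rpc_cmd calls traced above from target/abort.sh (rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock); the workspace path is the one used by this job.

  # Condensed sketch of the abort-test target setup and workload traced above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"

  # nvmf_tgt was started inside cvl_0_0_ns_spdk as: nvmf_tgt -i 0 -e 0xFFFF -m 0xE
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0               # 64 MB malloc bdev, 4096-byte blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000          # artificial latency so requests stay queued
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Queue depth 128 against the slow namespace; the example submits aborts for
  # queued I/O, which produces the submitted/success totals reported below.
  "$SPDK/build/examples/abort" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128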
00:14:04.494 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 25706 00:14:04.494 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 25767, failed to submit 66 00:14:04.494 success 25706, unsuccess 61, failed 0 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:04.494 rmmod nvme_tcp 00:14:04.494 rmmod nvme_fabrics 00:14:04.494 rmmod nvme_keyring 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1325053 ']' 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1325053 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1325053 ']' 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1325053 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1325053 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1325053' 00:14:04.494 killing process with pid 1325053 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1325053 00:14:04.494 14:16:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1325053 00:14:05.867 14:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:05.867 14:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:05.867 14:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:05.867 14:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.867 14:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.867 14:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.867 14:16:14 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.867 14:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.772 14:16:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:07.772 00:14:07.772 real 0m9.239s 00:14:07.772 user 0m14.796s 00:14:07.772 sys 0m2.892s 00:14:07.772 14:16:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.772 14:16:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:07.772 ************************************ 00:14:07.772 END TEST nvmf_abort 00:14:07.772 ************************************ 00:14:07.772 14:16:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:07.772 14:16:17 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:07.772 14:16:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:07.772 14:16:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.772 14:16:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:07.772 ************************************ 00:14:07.772 START TEST nvmf_ns_hotplug_stress 00:14:07.772 ************************************ 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:07.772 * Looking for test storage... 00:14:07.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.772 14:16:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.772 14:16:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.772 14:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.304 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:10.305 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:10.305 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.305 14:16:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:10.305 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:10.305 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.305 14:16:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:10.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:14:10.305 00:14:10.305 --- 10.0.0.2 ping statistics --- 00:14:10.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.305 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:10.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:14:10.305 00:14:10.305 --- 10.0.0.1 ping statistics --- 00:14:10.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.305 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1327551 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1327551 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1327551 ']' 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.305 14:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.305 [2024-07-10 14:16:19.425073] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:14:10.305 [2024-07-10 14:16:19.425242] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.305 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.305 [2024-07-10 14:16:19.574709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:10.563 [2024-07-10 14:16:19.838323] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.563 [2024-07-10 14:16:19.838403] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.563 [2024-07-10 14:16:19.838448] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.563 [2024-07-10 14:16:19.838472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.563 [2024-07-10 14:16:19.838508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.563 [2024-07-10 14:16:19.838644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.563 [2024-07-10 14:16:19.838691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.563 [2024-07-10 14:16:19.838712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.127 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.127 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:14:11.127 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.127 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.127 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.127 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.127 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:11.127 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:11.384 [2024-07-10 14:16:20.610292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.384 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:11.640 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.640 [2024-07-10 14:16:21.109536] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.897 14:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:11.897 14:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:14:12.461 Malloc0 00:14:12.461 14:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:12.461 Delay0 00:14:12.461 14:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.718 14:16:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:12.975 NULL1 00:14:12.975 14:16:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:13.232 14:16:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1327965 00:14:13.232 14:16:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:13.232 14:16:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:13.232 14:16:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.511 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.475 Read completed with error (sct=0, sc=11) 00:14:14.475 14:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:14.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:14.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:14.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:14.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:14.991 14:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:14.991 14:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:15.249 true 00:14:15.249 14:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:15.249 14:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.814 14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.378 14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:16.378 14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:16.378 true 00:14:16.378 
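The remainder of this test is the hotplug loop visible in the trace: while spdk_nvme_perf reads from cnode1 for 30 seconds, the script repeatedly removes namespace 1, re-adds the Delay0 bdev, and grows NULL1 with bdev_null_resize. Below is a condensed sketch; the loop structure is reconstructed from the trace rather than copied from target/ns_hotplug_stress.sh, so treat it as illustrative only.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1

  # Reader that keeps I/O in flight while namespaces are hot-removed and re-added.
  # -Q 1000 keeps the run going through the expected I/O errors, which is why the
  # log shows "Message suppressed 999 times: Read completed with error".
  "$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do       # loop until the 30 s perf run exits
      $RPC nvmf_subsystem_remove_ns "$NQN" 1      # hot-remove namespace 1 under load
      $RPC nvmf_subsystem_add_ns "$NQN" Delay0    # re-add the delay bdev as a namespace
      null_size=$((null_size + 1))
      $RPC bdev_null_resize NULL1 "$null_size"    # resize the exported null bdev (1001, 1002, ...)
  done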
14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:16.379 14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.637 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.895 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:16.895 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:17.153 true 00:14:17.153 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:17.153 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.410 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.668 14:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:17.668 14:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:17.925 true 00:14:17.925 14:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:17.925 14:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.296 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.296 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:19.296 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:19.554 true 00:14:19.554 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:19.554 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.811 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.069 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:20.069 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:20.327 true 00:14:20.327 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:20.327 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.261 14:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:21.519 14:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:21.519 14:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:21.519 true 00:14:21.519 14:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:21.519 14:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.775 14:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.034 14:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:22.034 14:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:22.292 true 00:14:22.292 14:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:22.292 14:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.226 14:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.226 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.484 14:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:23.484 14:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:23.741 true 00:14:23.741 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:23.741 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.998 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.256 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:24.256 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:24.514 true 00:14:24.514 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:24.514 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:25.449 14:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.707 14:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:25.707 14:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:25.964 true 00:14:25.964 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:25.964 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.222 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.480 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:26.480 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:26.480 true 00:14:26.480 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:26.480 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.855 14:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.855 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:27.855 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:28.113 true 00:14:28.113 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:28.114 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.371 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.629 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:28.629 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:28.887 true 00:14:28.887 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:28.887 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.145 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.403 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:29.403 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:29.660 true 00:14:29.660 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:29.660 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:30.593 14:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:30.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:30.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:30.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:30.851 14:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:30.851 14:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:31.417 true 00:14:31.417 14:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:31.417 14:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:31.982 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.240 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:32.240 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:32.498 true 00:14:32.498 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:32.498 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.755 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.013 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:33.013 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:33.272 true 00:14:33.272 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:33.272 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.204 14:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.461 14:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:34.461 14:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:34.719 true 00:14:34.719 14:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:34.719 14:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.977 14:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.234 14:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:35.234 14:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:35.234 true 00:14:35.492 14:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:35.492 14:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:36.423 14:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.423 14:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:36.423 14:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:36.680 true 00:14:36.680 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:36.680 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.938 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.195 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:37.195 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:37.453 true 00:14:37.453 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:37.453 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:38.385 14:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.643 14:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:38.643 14:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:38.900 true 00:14:38.900 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:38.900 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.158 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.415 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:39.415 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:39.673 true 00:14:39.673 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:39.673 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:40.605 14:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.605 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:40.605 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:40.862 true 00:14:40.862 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:40.862 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.121 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.413 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:41.413 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:41.696 true 00:14:41.697 14:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:41.697 14:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:42.630 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:42.887 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:42.887 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:43.145 true 00:14:43.145 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965 00:14:43.145 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.403 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.668 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:43.668 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:43.668 Initializing NVMe Controllers 00:14:43.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:43.668 Controller IO queue size 128, less than required. 00:14:43.668 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:43.668 Controller IO queue size 128, less than required. 00:14:43.668 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:43.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:43.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:43.668 Initialization complete. Launching workers. 
00:14:43.668 ========================================================
00:14:43.668 Latency(us)
00:14:43.668 Device Information : IOPS MiB/s Average min max
00:14:43.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 706.95 0.35 96565.38 3373.36 1021392.36
00:14:43.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8721.05 4.26 14676.62 3994.43 477317.90
00:14:43.668 ========================================================
00:14:43.668 Total : 9428.00 4.60 20816.97 3373.36 1021392.36
00:14:43.668
00:14:43.926 true
00:14:43.926 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1327965
00:14:43.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1327965) - No such process
00:14:43.926 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1327965
00:14:43.926 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:44.184 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:44.442 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:14:44.442 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:14:44.442 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:14:44.442 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:44.442 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:14:44.700 null0
00:14:44.700 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:44.700 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:44.700 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:14:44.957 null1
00:14:44.957 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:44.957 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:44.957 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:14:45.215 null2
00:14:45.215 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:45.215 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:45.215 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:14:45.473 null3
00:14:45.473 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:45.473 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:45.473 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress --
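The Total row of the perf summary above can be cross-checked from the two per-namespace rows: the IOPS add up, and the reported average latency is the IOPS-weighted mean of the two averages. A quick check with bc, using the values copied from the table:

    # 706.95 + 8721.05 = 9428.00 IOPS, as reported in the Total row.
    echo '706.95 + 8721.05' | bc

    # IOPS-weighted average latency, about 20816.97 us, again matching the Total row.
    echo 'scale=2; (706.95*96565.38 + 8721.05*14676.62) / 9428.00' | bc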
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:45.473 null4 00:14:45.731 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:45.731 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:45.731 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:45.990 null5 00:14:45.990 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:45.990 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:45.990 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:45.990 null6 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:46.248 null7 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:46.248 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:46.249 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:46.249 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:46.249 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:46.249 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:46.249 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:46.249 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:46.249 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:46.249 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
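In this second phase the @14 through @18 markers repeat for eight workers at once. Each worker pins one namespace ID to one null bdev and adds and removes it in a tight loop. The sketch below is reconstructed from those trace lines; the name add_remove, the argument pairs (add_remove 1 null0, add_remove 2 null1, ...) and the bound of 10 all appear in the trace, while the exact function body is otherwise an assumption.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # @14-@18: one hotplug worker; nsid and bdev are fixed per worker.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }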
00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
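The launcher side of this phase is traced at @58 through @66: eight null bdevs are created, one add_remove worker is started per bdev, the worker PIDs are collected, and the script then waits for all of them. A sketch consistent with the trace follows, reusing the $rpc helper and the add_remove function from the sketch above; the backgrounding with & is inferred from the pids+=($!) lines rather than shown directly in the trace.

    nthreads=8
    pids=()

    # @59-@60: create the backing bdevs null0..null7.
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096
    done

    # @62-@64: one worker per bdev, namespace ID i+1 mapped to null$i as in the trace.
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done

    # @66: wait for every worker (the trace shows: wait 1331995 1331996 ... 1332008).
    wait "${pids[@]}"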
00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1331995 1331996 1331998 1332000 1332002 1332004 1332006 1332008 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:46.507 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:46.765 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:46.765 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.765 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:46.765 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:46.765 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:46.765 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:46.765 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:46.765 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.023 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:47.281 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.281 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:47.281 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:47.281 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:47.281 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:47.281 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:47.281 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:47.282 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.539 14:16:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:47.539 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:47.797 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:47.797 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:47.797 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.797 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:47.797 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:47.797 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:47.797 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:47.797 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.055 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.056 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.056 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:48.056 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:48.056 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.056 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.056 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:48.313 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:48.313 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:48.313 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.313 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:48.313 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:48.313 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:48.313 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:48.313 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:48.571 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:48.571 
14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:48.829 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:48.829 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.829 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:48.829 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:48.829 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:48.829 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:48.829 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:48.829 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:49.087 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.087 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.087 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:49.087 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.087 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.087 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:49.087 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.087 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.087 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:49.087 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.087 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.087 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:49.088 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.088 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.088 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:49.088 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.088 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.088 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:49.088 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.088 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.088 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:49.088 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.088 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.088 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:49.346 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:49.346 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:49.346 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:49.346 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:49.346 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:49.346 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:49.346 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:49.346 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:49.604 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:49.862 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:49.862 
14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:49.862 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:49.862 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.862 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:49.862 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:49.862 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:49.862 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.120 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:50.377 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:50.377 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:50.377 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:50.377 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:50.377 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.377 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:50.377 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:50.377 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:50.636 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.636 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.636 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:50.636 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:50.636 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.636 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:50.894 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:51.152 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:51.152 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:51.152 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:51.152 
14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:51.152 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.152 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:51.152 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:51.152 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.411 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:51.669 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:51.669 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:51.669 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:51.669 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.669 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:51.669 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:51.669 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:51.669 14:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
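The ns_hotplug_stress.sh@16-@18 markers above trace the stress loop itself: each pass attaches the eight null bdevs (null0-null7) to nqn.2016-06.io.spdk:cnode1 as namespaces 1-8 in shuffled order and then detaches them again, while the initiator side stays connected. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the real script (the loop bounds, the shuffling, and the rpc.py path are assumptions; only the two RPC calls and their arguments are taken verbatim from the log):

    # Sketch of the namespace hot-plug loop traced above (not the actual
    # ns_hotplug_stress.sh; loop structure and shuffling are approximations).
    rpc=./scripts/rpc.py                       # hypothetical path to rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for pass in $(seq 1 10); do
        for n in $(seq 1 8 | shuf); do         # attach nsid 1..8 in random order
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        for n in $(seq 1 8 | shuf); do         # detach them again
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done

Every add or remove is a separate JSON-RPC call into the running nvmf_tgt, so a connected host sees a continuous stream of namespace attach/detach notifications for the whole run, which is the hot-plug path this test is meant to stress.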
00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:51.927 rmmod nvme_tcp 00:14:51.927 rmmod nvme_fabrics 00:14:51.927 rmmod nvme_keyring 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1327551 ']' 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1327551 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1327551 ']' 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1327551 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1327551 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1327551' 00:14:51.927 killing process with pid 1327551 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1327551 00:14:51.927 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1327551 00:14:53.301 14:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.301 14:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.301 14:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.301 14:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.301 14:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.301 14:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.301 14:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.301 14:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.203 14:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:55.203 00:14:55.203 real 0m47.596s 00:14:55.203 user 3m26.703s 00:14:55.203 sys 0m18.670s 00:14:55.203 14:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:55.203 14:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.203 ************************************ 00:14:55.203 END TEST nvmf_ns_hotplug_stress 00:14:55.203 ************************************ 00:14:55.461 14:17:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:55.461 14:17:04 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:55.461 14:17:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:55.461 14:17:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.461 14:17:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:55.461 ************************************ 00:14:55.461 START TEST nvmf_connect_stress 00:14:55.461 ************************************ 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:55.461 * Looking for test storage... 
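The teardown traced just before the summary (ns_hotplug_stress.sh@68/@70 and the nvmf/common.sh calls) follows the usual shape: clear the EXIT trap, unload the host-side NVMe modules (the bare rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe -v -r output), kill and wait for the nvmf_tgt process, then tear down the test network. Roughly, using the names from this run; the namespace removal is inferred from the _remove_spdk_ns call, whose body is not shown in the log:

    # Approximate teardown, reconstructed from the trace above.
    trap - SIGINT SIGTERM EXIT          # drop the error-handling trap
    sync
    modprobe -v -r nvme-tcp             # emits the rmmod nvme_tcp/... lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"  # stop nvmf_tgt (pid 1327551 in this run)
    ip netns delete cvl_0_0_ns_spdk     # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1            # clear the initiator-side address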
00:14:55.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:55.461 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:55.462 14:17:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:57.358 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:57.358 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:57.358 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:57.358 14:17:06 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:57.358 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:57.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:57.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:14:57.358 00:14:57.358 --- 10.0.0.2 ping statistics --- 00:14:57.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.358 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:57.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:57.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:14:57.358 00:14:57.358 --- 10.0.0.1 ping statistics --- 00:14:57.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.358 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1334877 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1334877 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1334877 ']' 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.358 14:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.615 [2024-07-10 14:17:06.905846] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
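The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-@268) turns the two ports of the e810 card into a back-to-back target/initiator pair on a single host: cvl_0_0 is moved into a fresh network namespace and becomes the target interface at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the link. Collected from the trace, in order:

    # Back-to-back NVMe/TCP test network over one dual-port NIC
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

Because the target end lives in its own namespace, nvmf_tgt is launched through ip netns exec cvl_0_0_ns_spdk so its listener binds to 10.0.0.2 while the initiator-side tools run unmodified in the root namespace.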
00:14:57.615 [2024-07-10 14:17:06.905976] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.615 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.615 [2024-07-10 14:17:07.045550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:57.872 [2024-07-10 14:17:07.305256] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.872 [2024-07-10 14:17:07.305336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.872 [2024-07-10 14:17:07.305370] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.872 [2024-07-10 14:17:07.305392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.872 [2024-07-10 14:17:07.305415] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.872 [2024-07-10 14:17:07.305572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.872 [2024-07-10 14:17:07.305643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.872 [2024-07-10 14:17:07.305666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.436 [2024-07-10 14:17:07.820154] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.436 [2024-07-10 14:17:07.847946] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.436 NULL1 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1335029 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.436 14:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.693 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.950 14:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.950 14:17:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:14:58.950 14:17:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.950 14:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.950 14:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.209 14:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.209 14:17:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:14:59.209 14:17:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.209 14:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.209 14:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.467 14:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.467 14:17:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 
00:14:59.467 14:17:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.467 14:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.467 14:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.725 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.725 14:17:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:14:59.725 14:17:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.725 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.725 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.290 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.290 14:17:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:00.290 14:17:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.290 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.290 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.547 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.547 14:17:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:00.547 14:17:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.547 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.547 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.804 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.804 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:00.804 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.804 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.804 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.061 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.061 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:01.061 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.061 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.061 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.628 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.628 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:01.628 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.628 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.628 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.886 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.886 14:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:01.886 14:17:11 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.886 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.886 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.143 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.143 14:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:02.143 14:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.144 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.144 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.401 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.401 14:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:02.401 14:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.401 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.401 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.659 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.659 14:17:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:02.659 14:17:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.659 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.659 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.225 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.225 14:17:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:03.225 14:17:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.225 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.225 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.483 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.483 14:17:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:03.483 14:17:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.483 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.483 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.749 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.749 14:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:03.749 14:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.749 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.749 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.006 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.006 14:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:04.006 14:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.006 
14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.006 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.264 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.264 14:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:04.264 14:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.264 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.264 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.830 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.830 14:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:04.830 14:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.830 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.830 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.087 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.087 14:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:05.087 14:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.087 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.087 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.346 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.346 14:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:05.346 14:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.346 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.346 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.604 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.604 14:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:05.604 14:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.604 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.604 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.169 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.169 14:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:06.169 14:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.169 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.169 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.427 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.427 14:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:06.427 14:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.427 14:17:15 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.427 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.684 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.684 14:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:06.684 14:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.685 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.685 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.942 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.942 14:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:06.942 14:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.942 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.942 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.200 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.200 14:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:07.200 14:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.200 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.200 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.766 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.766 14:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:07.766 14:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.766 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.766 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.050 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.050 14:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:08.050 14:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.050 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.050 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.332 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.332 14:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:08.332 14:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.332 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.332 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.616 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.616 14:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:08.616 14:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.616 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:15:08.616 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.616 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1335029 00:15:08.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1335029) - No such process 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1335029 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:08.873 rmmod nvme_tcp 00:15:08.873 rmmod nvme_fabrics 00:15:08.873 rmmod nvme_keyring 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1334877 ']' 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1334877 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1334877 ']' 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1334877 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:08.873 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1334877 00:15:09.130 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:09.130 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:09.130 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1334877' 00:15:09.130 killing process with pid 1334877 00:15:09.130 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1334877 00:15:09.130 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1334877 00:15:10.500 14:17:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:10.500 14:17:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:10.500 14:17:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:15:10.500 14:17:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.500 14:17:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:10.500 14:17:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.500 14:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.500 14:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.402 14:17:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:12.402 00:15:12.402 real 0m16.910s 00:15:12.402 user 0m42.112s 00:15:12.402 sys 0m5.824s 00:15:12.402 14:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:12.402 14:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.402 ************************************ 00:15:12.402 END TEST nvmf_connect_stress 00:15:12.402 ************************************ 00:15:12.402 14:17:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:12.402 14:17:21 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:12.402 14:17:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:12.402 14:17:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:12.402 14:17:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:12.402 ************************************ 00:15:12.402 START TEST nvmf_fused_ordering 00:15:12.402 ************************************ 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:12.402 * Looking for test storage... 
00:15:12.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.402 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:12.403 14:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:14.300 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:14.301 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:14.301 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:14.301 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:14.301 14:17:23 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:14.301 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:14.301 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:14.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:14.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:15:14.559 00:15:14.559 --- 10.0.0.2 ping statistics --- 00:15:14.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.559 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:14.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:14.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:15:14.559 00:15:14.559 --- 10.0.0.1 ping statistics --- 00:15:14.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.559 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1338311 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1338311 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1338311 ']' 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:14.559 14:17:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.559 [2024-07-10 14:17:23.933682] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:15:14.559 [2024-07-10 14:17:23.933855] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.559 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.818 [2024-07-10 14:17:24.069119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.076 [2024-07-10 14:17:24.319709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.076 [2024-07-10 14:17:24.319779] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.076 [2024-07-10 14:17:24.319807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.076 [2024-07-10 14:17:24.319832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.076 [2024-07-10 14:17:24.319853] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.076 [2024-07-10 14:17:24.319901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:15.641 [2024-07-10 14:17:24.871996] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:15.641 [2024-07-10 14:17:24.888224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.641 14:17:24 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:15.641 NULL1 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:15.641 14:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.642 14:17:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:15.642 [2024-07-10 14:17:24.958840] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:15:15.642 [2024-07-10 14:17:24.958931] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338461 ] 00:15:15.642 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.208 Attached to nqn.2016-06.io.spdk:cnode1 00:15:16.208 Namespace ID: 1 size: 1GB 00:15:16.208 fused_ordering(0) 00:15:16.208 fused_ordering(1) 00:15:16.208 fused_ordering(2) 00:15:16.208 fused_ordering(3) 00:15:16.208 fused_ordering(4) 00:15:16.208 fused_ordering(5) 00:15:16.208 fused_ordering(6) 00:15:16.208 fused_ordering(7) 00:15:16.208 fused_ordering(8) 00:15:16.208 fused_ordering(9) 00:15:16.208 fused_ordering(10) 00:15:16.208 fused_ordering(11) 00:15:16.208 fused_ordering(12) 00:15:16.208 fused_ordering(13) 00:15:16.208 fused_ordering(14) 00:15:16.208 fused_ordering(15) 00:15:16.208 fused_ordering(16) 00:15:16.208 fused_ordering(17) 00:15:16.208 fused_ordering(18) 00:15:16.208 fused_ordering(19) 00:15:16.208 fused_ordering(20) 00:15:16.208 fused_ordering(21) 00:15:16.208 fused_ordering(22) 00:15:16.208 fused_ordering(23) 00:15:16.208 fused_ordering(24) 00:15:16.208 fused_ordering(25) 00:15:16.208 fused_ordering(26) 00:15:16.208 fused_ordering(27) 00:15:16.208 fused_ordering(28) 00:15:16.208 fused_ordering(29) 00:15:16.208 fused_ordering(30) 00:15:16.208 fused_ordering(31) 00:15:16.208 fused_ordering(32) 00:15:16.208 fused_ordering(33) 00:15:16.208 fused_ordering(34) 00:15:16.208 fused_ordering(35) 00:15:16.208 fused_ordering(36) 00:15:16.208 fused_ordering(37) 00:15:16.208 fused_ordering(38) 00:15:16.208 fused_ordering(39) 00:15:16.208 fused_ordering(40) 00:15:16.208 fused_ordering(41) 00:15:16.208 fused_ordering(42) 00:15:16.208 fused_ordering(43) 00:15:16.208 
fused_ordering(44) 00:15:16.208 [repetitive per-iteration output elided: fused_ordering(45) through fused_ordering(1011) repeat identically, one counter per entry, with timestamps advancing from 00:15:16.208 to 00:15:19.213 and no errors or other output interleaved] fused_ordering(1012)
00:15:19.213 fused_ordering(1013) 00:15:19.213 fused_ordering(1014) 00:15:19.213 fused_ordering(1015) 00:15:19.213 fused_ordering(1016) 00:15:19.213 fused_ordering(1017) 00:15:19.213 fused_ordering(1018) 00:15:19.213 fused_ordering(1019) 00:15:19.213 fused_ordering(1020) 00:15:19.213 fused_ordering(1021) 00:15:19.213 fused_ordering(1022) 00:15:19.213 fused_ordering(1023) 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:19.213 rmmod nvme_tcp 00:15:19.213 rmmod nvme_fabrics 00:15:19.213 rmmod nvme_keyring 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1338311 ']' 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1338311 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1338311 ']' 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1338311 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1338311 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1338311' 00:15:19.213 killing process with pid 1338311 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1338311 00:15:19.213 14:17:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1338311 00:15:20.587 14:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:20.587 14:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:20.587 14:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:20.587 14:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.587 14:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:20.587 14:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.587 14:17:29 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.587 14:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.489 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:22.489 00:15:22.489 real 0m10.130s 00:15:22.489 user 0m8.384s 00:15:22.489 sys 0m3.673s 00:15:22.489 14:17:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:22.489 14:17:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:22.489 ************************************ 00:15:22.489 END TEST nvmf_fused_ordering 00:15:22.489 ************************************ 00:15:22.489 14:17:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:22.489 14:17:31 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:22.489 14:17:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:22.489 14:17:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.489 14:17:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:22.489 ************************************ 00:15:22.489 START TEST nvmf_delete_subsystem 00:15:22.489 ************************************ 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:22.489 * Looking for test storage... 00:15:22.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.489 14:17:31 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:22.489 14:17:31 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:15:22.489 14:17:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.019 14:17:33 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:25.019 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:25.019 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:25.019 14:17:33 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:25.019 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:25.019 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:25.019 14:17:33 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:25.019 14:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.019 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.019 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.019 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:25.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:15:25.019 00:15:25.019 --- 10.0.0.2 ping statistics --- 00:15:25.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.019 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:25.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:15:25.020 00:15:25.020 --- 10.0.0.1 ping statistics --- 00:15:25.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.020 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1340919 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1340919 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1340919 ']' 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.020 14:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:25.020 [2024-07-10 14:17:34.175918] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:15:25.020 [2024-07-10 14:17:34.176052] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.020 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.020 [2024-07-10 14:17:34.311981] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:25.278 [2024-07-10 14:17:34.569383] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:25.278 [2024-07-10 14:17:34.569466] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.278 [2024-07-10 14:17:34.569501] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.278 [2024-07-10 14:17:34.569522] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.278 [2024-07-10 14:17:34.569545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.278 [2024-07-10 14:17:34.569673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.278 [2024-07-10 14:17:34.569681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.844 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.844 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:15:25.844 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:25.844 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:25.844 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:25.844 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.844 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.844 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.844 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:25.844 [2024-07-10 14:17:35.163500] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.844 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.844 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:25.844 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.844 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:25.844 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:25.845 [2024-07-10 14:17:35.180559] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:25.845 NULL1 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:25.845 Delay0 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1341073 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:25.845 14:17:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:25.845 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.845 [2024-07-10 14:17:35.315592] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
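The xtrace output above amounts to a short RPC sequence followed by an initiator-side load. As a minimal sketch — assuming a running nvmf_tgt, the default /var/tmp/spdk.sock RPC socket, and the 10.0.0.2/4420 listener configured earlier — the same setup can be driven by hand with scripts/rpc.py; every option value below is taken from the log, and only the RPC shell variable is an illustrative shorthand:

# Minimal sketch (not part of the harness): reproduce the delete_subsystem setup by hand.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"   # illustrative shorthand
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512          # null backing bdev: 1000 MB, 512-byte blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s latencies (microseconds)
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Initiator-side load, started in the background exactly as the script does:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

The 1,000,000 us Delay0 latencies appear to be there precisely so that the 128-deep perf queues still hold outstanding commands when the subsystem is deleted a couple of seconds later, as the log resumes below.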
00:15:27.746 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.746 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.746 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 starting I/O failed: -6 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 starting I/O failed: -6 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.004 starting I/O failed: -6 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 starting I/O failed: -6 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.004 starting I/O failed: -6 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.004 starting I/O failed: -6 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 starting I/O failed: -6 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 starting I/O failed: -6 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.004 starting I/O failed: -6 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 starting I/O failed: -6 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 starting I/O failed: -6 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 [2024-07-10 14:17:37.381994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(5) to be set 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 
starting I/O failed: -6 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 starting I/O failed: -6 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 starting I/O failed: -6 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 starting I/O failed: -6 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Read completed with error (sct=0, sc=8) 00:15:28.004 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 starting I/O failed: -6 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 starting I/O failed: -6 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 starting I/O failed: -6 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 starting I/O failed: -6 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 starting I/O failed: -6 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 starting I/O failed: -6 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 starting I/O failed: -6 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 [2024-07-10 14:17:37.383277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016100 is same with the state(5) to be set 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 
00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed 
with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Write completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 Read completed with error (sct=0, sc=8) 00:15:28.005 [2024-07-10 14:17:37.384121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(5) to be set 00:15:28.939 [2024-07-10 14:17:38.336207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(5) to be set 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 [2024-07-10 14:17:38.380038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(5) to be set 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Write 
completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 [2024-07-10 14:17:38.384782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(5) to be set 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 [2024-07-10 14:17:38.385547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(5) to be set 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Write completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.939 Read completed with error (sct=0, sc=8) 00:15:28.940 Read completed with error (sct=0, sc=8) 00:15:28.940 Read completed with error (sct=0, sc=8) 00:15:28.940 Write completed with error (sct=0, sc=8) 00:15:28.940 Write completed with error (sct=0, sc=8) 00:15:28.940 Read completed with error (sct=0, sc=8) 00:15:28.940 Read completed with error (sct=0, sc=8) 00:15:28.940 Write completed with error (sct=0, sc=8) 00:15:28.940 Read completed 
with error (sct=0, sc=8) 00:15:28.940 Read completed with error (sct=0, sc=8) 00:15:28.940 Read completed with error (sct=0, sc=8) 00:15:28.940 Read completed with error (sct=0, sc=8) 00:15:28.940 Read completed with error (sct=0, sc=8) 00:15:28.940 Write completed with error (sct=0, sc=8) 00:15:28.940 Read completed with error (sct=0, sc=8) 00:15:28.940 Read completed with error (sct=0, sc=8) 00:15:28.940 Read completed with error (sct=0, sc=8) 00:15:28.940 [2024-07-10 14:17:38.387073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(5) to be set 00:15:28.940 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.940 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:15:28.940 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1341073 00:15:28.940 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:28.940 Initializing NVMe Controllers 00:15:28.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:28.940 Controller IO queue size 128, less than required. 00:15:28.940 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:28.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:28.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:28.940 Initialization complete. Launching workers. 00:15:28.940 ======================================================== 00:15:28.940 Latency(us) 00:15:28.940 Device Information : IOPS MiB/s Average min max 00:15:28.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.37 0.08 906564.92 936.63 1014394.04 00:15:28.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 170.83 0.08 894179.54 961.43 1016182.31 00:15:28.940 ======================================================== 00:15:28.940 Total : 337.21 0.16 900290.39 936.63 1016182.31 00:15:28.940 00:15:28.940 [2024-07-10 14:17:38.391951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015980 (9): Bad file descriptor 00:15:28.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1341073 00:15:29.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1341073) - No such process 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1341073 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1341073 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1341073 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:29.505 [2024-07-10 14:17:38.912063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1341479 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1341479 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:29.505 14:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:29.763 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.763 [2024-07-10 14:17:39.021445] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:15:30.020 14:17:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:30.020 14:17:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1341479 00:15:30.020 14:17:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:30.585 14:17:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:30.585 14:17:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1341479 00:15:30.585 14:17:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:31.150 14:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:31.150 14:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1341479 00:15:31.150 14:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:31.714 14:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:31.714 14:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1341479 00:15:31.714 14:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:31.972 14:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:31.972 14:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1341479 00:15:31.972 14:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:32.538 14:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:32.538 14:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1341479 00:15:32.538 14:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:32.796 Initializing NVMe Controllers 00:15:32.796 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:32.796 Controller IO queue size 128, less than required. 00:15:32.796 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:32.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:32.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:32.796 Initialization complete. Launching workers. 
00:15:32.796 ======================================================== 00:15:32.796 Latency(us) 00:15:32.796 Device Information : IOPS MiB/s Average min max 00:15:32.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005622.57 1000313.21 1044506.02 00:15:32.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006565.27 1000294.56 1041409.19 00:15:32.796 ======================================================== 00:15:32.796 Total : 256.00 0.12 1006093.92 1000294.56 1044506.02 00:15:32.796 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1341479 00:15:33.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1341479) - No such process 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1341479 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:33.055 rmmod nvme_tcp 00:15:33.055 rmmod nvme_fabrics 00:15:33.055 rmmod nvme_keyring 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1340919 ']' 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1340919 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1340919 ']' 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1340919 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1340919 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1340919' 00:15:33.055 killing process with pid 1340919 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1340919 00:15:33.055 14:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1340919 00:15:34.428 14:17:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.428 14:17:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.428 14:17:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.428 14:17:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.428 14:17:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.428 14:17:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.428 14:17:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.428 14:17:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.958 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:36.958 00:15:36.958 real 0m14.002s 00:15:36.958 user 0m30.466s 00:15:36.958 sys 0m3.177s 00:15:36.958 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:36.958 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:36.958 ************************************ 00:15:36.958 END TEST nvmf_delete_subsystem 00:15:36.958 ************************************ 00:15:36.958 14:17:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:36.958 14:17:45 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:36.958 14:17:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:36.959 14:17:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.959 14:17:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:36.959 ************************************ 00:15:36.959 START TEST nvmf_ns_masking 00:15:36.959 ************************************ 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:36.959 * Looking for test storage... 
00:15:36.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7daf68ba-6970-43bb-b6a8-d8bc271fc734 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2f345703-d36c-4c61-9101-5b93267d78a6 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=fb708bf3-cd2f-4ac0-8317-01542bbec2e0 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:36.959 14:17:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:38.860 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:38.860 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.860 
14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:38.860 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:38.860 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:38.860 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.861 14:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:38.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:38.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:15:38.861 00:15:38.861 --- 10.0.0.2 ping statistics --- 00:15:38.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.861 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:38.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:15:38.861 00:15:38.861 --- 10.0.0.1 ping statistics --- 00:15:38.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.861 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1343954 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1343954 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1343954 ']' 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.861 14:17:48 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.861 14:17:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:38.861 [2024-07-10 14:17:48.196590] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:15:38.861 [2024-07-10 14:17:48.196748] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.861 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.861 [2024-07-10 14:17:48.331039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.171 [2024-07-10 14:17:48.586432] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.171 [2024-07-10 14:17:48.586511] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.171 [2024-07-10 14:17:48.586540] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.171 [2024-07-10 14:17:48.586564] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.171 [2024-07-10 14:17:48.586586] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.171 [2024-07-10 14:17:48.586634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.756 14:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.756 14:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:15:39.756 14:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:39.756 14:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:39.756 14:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:39.756 14:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.756 14:17:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:40.013 [2024-07-10 14:17:49.357727] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.013 14:17:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:40.013 14:17:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:40.013 14:17:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:40.271 Malloc1 00:15:40.271 14:17:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:40.836 Malloc2 00:15:40.836 14:17:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
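The ns_masking run traced above starts the target inside the cvl_0_0_ns_spdk network namespace, creates the TCP transport, and then builds its fixture from two 64 MB malloc bdevs and the same cnode1 subsystem NQN. Condensed (again with rpc.py as shorthand for the scripts/rpc.py path in the log), the configuration so far is roughly:

    # Sketch of the setup traced above; the namespace and listener are added in the lines that follow.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MB malloc bdev, 512-byte blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME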
00:15:41.093 14:17:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:41.351 14:17:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:41.608 [2024-07-10 14:17:50.854204] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.608 14:17:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:41.608 14:17:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fb708bf3-cd2f-4ac0-8317-01542bbec2e0 -a 10.0.0.2 -s 4420 -i 4 00:15:41.865 14:17:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:41.865 14:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:41.865 14:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:41.865 14:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:41.865 14:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:43.763 [ 0]:0x1 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:43.763 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.020 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=008d723ac3794fd4879adda497a2f2f5 00:15:44.020 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 008d723ac3794fd4879adda497a2f2f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.020 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
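The namespace-visibility check the test repeats from here on (ns_is_visible in ns_masking.sh) reduces to listing namespaces on the connected controller and confirming a non-zero NGUID; a minimal host-side sketch of the same idea:

    # connect as host1 with 4 I/O queues to the listener created above
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420 -i 4
    # a namespace counts as visible if it appears in list-ns ...
    nvme list-ns /dev/nvme0 | grep 0x1
    # ... and its NGUID reported by id-ns is non-zero
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid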
00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:44.276 [ 0]:0x1 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=008d723ac3794fd4879adda497a2f2f5 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 008d723ac3794fd4879adda497a2f2f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:44.276 [ 1]:0x2 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1da0bfc8d04b4ca78e388d14e3cc2dfc 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1da0bfc8d04b4ca78e388d14e3cc2dfc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:44.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.276 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.532 14:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:44.789 14:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:44.789 14:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fb708bf3-cd2f-4ac0-8317-01542bbec2e0 -a 10.0.0.2 -s 4420 -i 4 00:15:45.046 14:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:45.046 14:17:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:45.046 14:17:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:45.046 14:17:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:45.046 14:17:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:45.046 14:17:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:46.943 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:47.201 14:17:56 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:47.201 [ 0]:0x2 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1da0bfc8d04b4ca78e388d14e3cc2dfc 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
1da0bfc8d04b4ca78e388d14e3cc2dfc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.201 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:47.766 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:47.766 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.766 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:47.766 [ 0]:0x1 00:15:47.766 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.766 14:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.766 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=008d723ac3794fd4879adda497a2f2f5 00:15:47.766 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 008d723ac3794fd4879adda497a2f2f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.766 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:47.766 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.767 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:47.767 [ 1]:0x2 00:15:47.767 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:47.767 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.767 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1da0bfc8d04b4ca78e388d14e3cc2dfc 00:15:47.767 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1da0bfc8d04b4ca78e388d14e3cc2dfc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.767 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:48.024 [ 0]:0x2 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1da0bfc8d04b4ca78e388d14e3cc2dfc 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1da0bfc8d04b4ca78e388d14e3cc2dfc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:48.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.024 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:48.281 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:48.281 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fb708bf3-cd2f-4ac0-8317-01542bbec2e0 -a 10.0.0.2 -s 4420 -i 4 00:15:48.538 14:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:48.538 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:48.538 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:48.538 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:48.538 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:48.538 14:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:50.435 14:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:50.435 14:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:50.435 14:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:50.435 14:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:50.435 14:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:50.435 14:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
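The target-side knob being exercised in this stretch is per-host namespace masking; stripped of the assertions, the pattern the trace keeps repeating is:

    # re-create namespace 1 hidden from every host by default
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # expose it to one host NQN, verify from the host, then hide it again
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1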
00:15:50.435 14:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:50.435 14:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:50.693 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:50.693 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:50.693 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:50.693 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.693 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:50.693 [ 0]:0x1 00:15:50.693 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:50.693 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.693 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=008d723ac3794fd4879adda497a2f2f5 00:15:50.693 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 008d723ac3794fd4879adda497a2f2f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.693 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:50.693 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.693 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:50.693 [ 1]:0x2 00:15:50.693 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:50.693 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.953 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1da0bfc8d04b4ca78e388d14e3cc2dfc 00:15:50.954 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1da0bfc8d04b4ca78e388d14e3cc2dfc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.954 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:50.954 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:50.954 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:50.954 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:50.954 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:50.954 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:50.954 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:50.954 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:50.954 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:50.954 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.954 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:51.211 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:51.211 14:18:00 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:51.212 [ 0]:0x2 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1da0bfc8d04b4ca78e388d14e3cc2dfc 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1da0bfc8d04b4ca78e388d14e3cc2dfc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:51.212 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:51.470 [2024-07-10 14:18:00.755556] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:51.470 request: 00:15:51.470 { 00:15:51.470 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.470 "nsid": 2, 00:15:51.470 "host": "nqn.2016-06.io.spdk:host1", 00:15:51.470 "method": "nvmf_ns_remove_host", 00:15:51.470 "req_id": 1 00:15:51.470 } 00:15:51.470 Got JSON-RPC error response 00:15:51.470 response: 00:15:51.470 { 00:15:51.470 "code": -32602, 00:15:51.470 "message": "Invalid parameters" 00:15:51.470 } 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:51.470 [ 0]:0x2 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1da0bfc8d04b4ca78e388d14e3cc2dfc 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
1da0bfc8d04b4ca78e388d14e3cc2dfc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:51.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1345747 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1345747 /var/tmp/host.sock 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1345747 ']' 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:51.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:51.470 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.471 14:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:51.729 [2024-07-10 14:18:01.010978] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
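From this point the test drives a second SPDK application as the NVMe host instead of the kernel initiator; sketched from the commands that follow, the flow is roughly:

    # host-side app on its own RPC socket, pinned to core 1 (-m 2)
    spdk_tgt -r /var/tmp/host.sock -m 2
    # attach a bdev-nvme controller over TCP, presenting a specific host NQN
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    # list the resulting bdevs to see which namespaces that host NQN was allowed
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs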
00:15:51.729 [2024-07-10 14:18:01.011133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345747 ] 00:15:51.729 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.729 [2024-07-10 14:18:01.135866] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.986 [2024-07-10 14:18:01.389251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.922 14:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.922 14:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:15:52.922 14:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:53.180 14:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:53.438 14:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7daf68ba-6970-43bb-b6a8-d8bc271fc734 00:15:53.438 14:18:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:53.438 14:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7DAF68BA697043BBB6A8D8BC271FC734 -i 00:15:53.697 14:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2f345703-d36c-4c61-9101-5b93267d78a6 00:15:53.697 14:18:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:53.697 14:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2F345703D36C4C6191015B93267D78A6 -i 00:15:53.954 14:18:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:54.211 14:18:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:54.468 14:18:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:54.468 14:18:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:55.034 nvme0n1 00:15:55.034 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:55.034 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:15:55.292 nvme1n2 00:15:55.292 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:55.292 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:55.292 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:55.292 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:55.292 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:55.550 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:55.550 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:55.550 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:55.550 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:55.808 14:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7daf68ba-6970-43bb-b6a8-d8bc271fc734 == \7\d\a\f\6\8\b\a\-\6\9\7\0\-\4\3\b\b\-\b\6\a\8\-\d\8\b\c\2\7\1\f\c\7\3\4 ]] 00:15:55.808 14:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:55.808 14:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:55.808 14:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:56.066 14:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2f345703-d36c-4c61-9101-5b93267d78a6 == \2\f\3\4\5\7\0\3\-\d\3\6\c\-\4\c\6\1\-\9\1\0\1\-\5\b\9\3\2\6\7\d\7\8\a\6 ]] 00:15:56.066 14:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1345747 00:15:56.066 14:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1345747 ']' 00:15:56.066 14:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1345747 00:15:56.066 14:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:56.066 14:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:56.066 14:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1345747 00:15:56.066 14:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:56.066 14:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:56.066 14:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1345747' 00:15:56.066 killing process with pid 1345747 00:15:56.066 14:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1345747 00:15:56.066 14:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1345747 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:58.595 14:18:07 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:58.595 rmmod nvme_tcp 00:15:58.595 rmmod nvme_fabrics 00:15:58.595 rmmod nvme_keyring 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1343954 ']' 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1343954 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1343954 ']' 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1343954 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:58.595 14:18:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1343954 00:15:58.595 14:18:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:58.595 14:18:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:58.595 14:18:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1343954' 00:15:58.595 killing process with pid 1343954 00:15:58.595 14:18:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1343954 00:15:58.595 14:18:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1343954 00:16:00.497 14:18:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:00.497 14:18:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:00.497 14:18:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:00.497 14:18:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:00.497 14:18:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:00.497 14:18:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.497 14:18:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.497 14:18:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.403 14:18:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:02.403 00:16:02.403 real 0m25.793s 00:16:02.403 user 0m34.885s 00:16:02.403 sys 0m4.469s 00:16:02.403 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:02.403 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:02.403 ************************************ 00:16:02.403 END TEST nvmf_ns_masking 00:16:02.403 ************************************ 00:16:02.403 14:18:11 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:16:02.403 14:18:11 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:16:02.403 14:18:11 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:02.403 14:18:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:02.403 14:18:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:02.403 14:18:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:02.403 ************************************ 00:16:02.403 START TEST nvmf_nvme_cli 00:16:02.403 ************************************ 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:02.403 * Looking for test storage... 00:16:02.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:02.403 14:18:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:02.404 14:18:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:02.404 14:18:11 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:16:02.404 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:02.404 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.404 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:02.404 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:02.404 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:02.404 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.404 14:18:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.404 14:18:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.404 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:02.404 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:02.404 14:18:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:02.404 14:18:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:04.934 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:04.935 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:04.935 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:04.935 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:04.935 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:04.935 14:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:04.935 14:18:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:04.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:16:04.935 00:16:04.935 --- 10.0.0.2 ping statistics --- 00:16:04.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.935 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:04.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:04.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:16:04.935 00:16:04.935 --- 10.0.0.1 ping statistics --- 00:16:04.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.935 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1349220 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1349220 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1349220 ']' 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.935 14:18:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:04.935 [2024-07-10 14:18:14.175164] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
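For reference, the nvmf_tcp_init sequence traced above reduces to the following shell steps; this is a condensed sketch of what the common.sh helpers did in this particular run, with the interface names (cvl_0_0, cvl_0_1), addresses, and namespace name taken from the trace rather than being general defaults, and paths shown relative to the spdk checkout:

  # flush any leftover addresses, then isolate the target-side port in its own namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # 10.0.0.1 stays on the initiator-facing port, 10.0.0.2 goes to the namespaced port
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic to port 4420 through on the initiator interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # connectivity is then verified with a ping in each direction, and the target
  # application is launched inside the namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF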
00:16:04.935 [2024-07-10 14:18:14.175312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.935 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.935 [2024-07-10 14:18:14.340839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:05.193 [2024-07-10 14:18:14.620776] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.193 [2024-07-10 14:18:14.620856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.193 [2024-07-10 14:18:14.620885] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.193 [2024-07-10 14:18:14.620907] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.193 [2024-07-10 14:18:14.620929] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.193 [2024-07-10 14:18:14.621063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.193 [2024-07-10 14:18:14.621130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.193 [2024-07-10 14:18:14.621186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.193 [2024-07-10 14:18:14.621196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.758 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.758 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:16:05.758 14:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:05.758 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:05.758 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:06.016 [2024-07-10 14:18:15.248972] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:06.016 Malloc0 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:06.016 Malloc1 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.016 14:18:15 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:06.016 [2024-07-10 14:18:15.440840] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.016 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:06.017 14:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.017 14:18:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:16:06.274 00:16:06.274 Discovery Log Number of Records 2, Generation counter 2 00:16:06.274 =====Discovery Log Entry 0====== 00:16:06.274 trtype: tcp 00:16:06.274 adrfam: ipv4 00:16:06.274 subtype: current discovery subsystem 00:16:06.274 treq: not required 00:16:06.274 portid: 0 00:16:06.274 trsvcid: 4420 00:16:06.274 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:06.274 traddr: 10.0.0.2 00:16:06.274 eflags: explicit discovery connections, duplicate discovery information 00:16:06.274 sectype: none 00:16:06.274 =====Discovery Log Entry 1====== 00:16:06.274 trtype: tcp 00:16:06.274 adrfam: ipv4 00:16:06.274 subtype: nvme subsystem 00:16:06.274 treq: not required 00:16:06.274 portid: 0 00:16:06.274 trsvcid: 4420 00:16:06.274 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:06.274 traddr: 10.0.0.2 00:16:06.274 eflags: none 00:16:06.274 sectype: none 00:16:06.274 14:18:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:06.274 14:18:15 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:06.274 14:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:06.274 14:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:06.274 14:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:06.274 14:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:06.274 14:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:06.274 14:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:06.274 14:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:06.274 14:18:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:06.274 14:18:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:06.839 14:18:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:06.839 14:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:06.839 14:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:06.839 14:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:06.839 14:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:06.839 14:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:08.734 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:08.734 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:08.734 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.734 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:08.734 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.734 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:08.734 14:18:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:08.734 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:08.734 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:08.734 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:08.992 14:18:18 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:08.992 /dev/nvme0n1 ]] 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:08.992 14:18:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:09.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:09.627 rmmod nvme_tcp 00:16:09.627 rmmod nvme_fabrics 00:16:09.627 rmmod nvme_keyring 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1349220 ']' 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1349220 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1349220 ']' 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1349220 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1349220 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1349220' 00:16:09.627 killing process with pid 1349220 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1349220 00:16:09.627 14:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1349220 00:16:11.018 14:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:11.018 14:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:11.018 14:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:11.018 14:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.018 14:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.018 14:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.018 14:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.018 14:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.554 14:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:13.554 00:16:13.554 real 0m10.790s 00:16:13.555 user 0m22.517s 00:16:13.555 sys 0m2.562s 00:16:13.555 14:18:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.555 14:18:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:13.555 ************************************ 00:16:13.555 END TEST nvmf_nvme_cli 00:16:13.555 ************************************ 00:16:13.555 14:18:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:13.555 14:18:22 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:16:13.555 14:18:22 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:13.555 14:18:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:13.555 14:18:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.555 14:18:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:13.555 ************************************ 00:16:13.555 START TEST nvmf_host_management 00:16:13.555 ************************************ 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:13.555 * Looking for test storage... 00:16:13.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:13.555 
14:18:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:13.555 14:18:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:15.456 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:15.456 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ 
up == up ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:15.456 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:15.456 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:15.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:16:15.456 00:16:15.456 --- 10.0.0.2 ping statistics --- 00:16:15.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.456 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:15.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:16:15.456 00:16:15.456 --- 10.0.0.1 ping statistics --- 00:16:15.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.456 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1351928 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1351928 00:16:15.456 
14:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1351928 ']' 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.456 14:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.457 14:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.457 14:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.457 14:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:15.457 [2024-07-10 14:18:24.810849] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:15.457 [2024-07-10 14:18:24.810998] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.457 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.715 [2024-07-10 14:18:24.950705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:15.973 [2024-07-10 14:18:25.224453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.973 [2024-07-10 14:18:25.224533] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.973 [2024-07-10 14:18:25.224560] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.973 [2024-07-10 14:18:25.224580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.973 [2024-07-10 14:18:25.224601] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:15.973 [2024-07-10 14:18:25.224732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.973 [2024-07-10 14:18:25.227465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.973 [2024-07-10 14:18:25.227540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.973 [2024-07-10 14:18:25.227548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:16.231 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.231 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:16.231 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:16.231 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:16.231 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:16.489 [2024-07-10 14:18:25.738626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:16.489 Malloc0 00:16:16.489 [2024-07-10 14:18:25.855569] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1352130 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1352130 /var/tmp/bdevperf.sock 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1352130 ']' 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:16.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:16.489 { 00:16:16.489 "params": { 00:16:16.489 "name": "Nvme$subsystem", 00:16:16.489 "trtype": "$TEST_TRANSPORT", 00:16:16.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:16.489 "adrfam": "ipv4", 00:16:16.489 "trsvcid": "$NVMF_PORT", 00:16:16.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:16.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:16.489 "hdgst": ${hdgst:-false}, 00:16:16.489 "ddgst": ${ddgst:-false} 00:16:16.489 }, 00:16:16.489 "method": "bdev_nvme_attach_controller" 00:16:16.489 } 00:16:16.489 EOF 00:16:16.489 )") 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:16.489 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:16.489 "params": { 00:16:16.489 "name": "Nvme0", 00:16:16.489 "trtype": "tcp", 00:16:16.489 "traddr": "10.0.0.2", 00:16:16.489 "adrfam": "ipv4", 00:16:16.489 "trsvcid": "4420", 00:16:16.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:16.489 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:16.489 "hdgst": false, 00:16:16.489 "ddgst": false 00:16:16.489 }, 00:16:16.489 "method": "bdev_nvme_attach_controller" 00:16:16.489 }' 00:16:16.489 [2024-07-10 14:18:25.968695] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:16.489 [2024-07-10 14:18:25.968829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352130 ] 00:16:16.747 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.747 [2024-07-10 14:18:26.093262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.003 [2024-07-10 14:18:26.332868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.567 Running I/O for 10 seconds... 
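For reference, the perf job traced above is the stock bdevperf example driven by a generated JSON config; a rough standalone equivalent is sketched below, where config.json is an illustrative file name standing in for the document that gen_nvmf_target_json feeds to bdevperf on fd 63 (the trace only prints the single bdev_nvme_attach_controller entry for Nvme0, not the surrounding wrapper), and paths are relative to the spdk checkout:

  # 10 second verify workload, queue depth 64, 64 KiB I/Os, bdevs taken from the JSON config
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json config.json -q 64 -o 65536 -w verify -t 10 &
  # the test then talks to bdevperf over its own RPC socket: wait for framework init,
  # then poll the read I/O counter of the attached Nvme0n1 bdev until enough I/O has completed
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock framework_wait_init
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'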
00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:17.567 14:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0
00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:17.826 [2024-07-10 14:18:27.236333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set
00:16:17.826 [... the same nvmf_tcp_qpair_set_recv_state error repeats continuously, with timestamps from 14:18:27.236446 through 14:18:27.237689, while the host is removed from the subsystem ...]
00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:17.826 14:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:17.826 [2024-07-10 14:18:27.242235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:17.826 [2024-07-10 14:18:27.242295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:17.826 [... the same command/completion pair is then logged for every outstanding I/O on the queue: READ commands cid 16-63 (lba 59392-65408, stepping by 128) and WRITE commands cid 0-14 (lba 65536-67328), each completed as ABORTED - SQ DELETION, timestamps 14:18:27.242350 through 14:18:27.245261 ...]
00:16:17.827 [2024-07-10 14:18:27.245600] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller.
00:16:17.827 [2024-07-10 14:18:27.245701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:17.827 [2024-07-10 14:18:27.245741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:17.827 [2024-07-10 14:18:27.245765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:17.827 [2024-07-10 14:18:27.245784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:17.827 [2024-07-10 14:18:27.245816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:17.827 [2024-07-10 14:18:27.245837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:17.827 [2024-07-10 14:18:27.245857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:17.827 [2024-07-10 14:18:27.245876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:17.827 [2024-07-10 14:18:27.245895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:16:17.827 [2024-07-10 14:18:27.247144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:16:17.827 14:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:17.827 14:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:16:17.827 task offset: 59264 on job bdev=Nvme0n1 fails
00:16:17.827
00:16:17.827 Latency(us)
00:16:17.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:17.827 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:17.827 Job: Nvme0n1 ended in about 0.45 seconds with error
00:16:17.827 Verification LBA range: start 0x0 length 0x400
00:16:17.827 Nvme0n1 : 0.45 1038.28 64.89 143.52 0.00 52692.76 4126.34 43496.49
00:16:17.827 ===================================================================================================================
00:16:17.827 Total : 1038.28 64.89 143.52 0.00 52692.76 4126.34 43496.49
00:16:17.827 [2024-07-10 14:18:27.252475] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:17.827 [2024-07-10 14:18:27.252535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:16:17.827 [2024-07-10 14:18:27.306097] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
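Note on the trace above: the waitforio helper (target/host_management.sh@45-64, xtraced at 14:18:26-27) simply polls the running bdevperf over its RPC socket until the Nvme0n1 bdev has completed at least 100 reads (67 on the first poll, 451 on the second). A minimal sketch of that loop, reconstructed from the xtrace rather than copied from the script, assuming the suite's rpc_cmd helper and jq are on PATH:

    waitforio() {
        local sock=$1 bdev=$2
        local ret=1 i read_io_count
        [ -z "$sock" ] && return 1      # needs the RPC socket, /var/tmp/bdevperf.sock above
        [ -z "$bdev" ] && return 1      # needs the bdev name, Nvme0n1 above
        for ((i = 10; i != 0; i--)); do
            # ask the running bdevperf for per-bdev I/O counters
            read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                            | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break                   # enough traffic observed
            fi
            sleep 0.25
        done
        return $ret
    }

With read_io_count at 451 the -ge 100 check passes, ret is cleared and the loop breaks, which is exactly the @58/@59/@60 sequence in the trace before the host is removed and re-added.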
00:16:19.201 14:18:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1352130 00:16:19.201 14:18:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:19.201 14:18:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:19.201 14:18:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:19.201 14:18:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:19.201 14:18:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:19.201 14:18:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:19.201 14:18:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:19.201 { 00:16:19.201 "params": { 00:16:19.201 "name": "Nvme$subsystem", 00:16:19.201 "trtype": "$TEST_TRANSPORT", 00:16:19.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:19.201 "adrfam": "ipv4", 00:16:19.201 "trsvcid": "$NVMF_PORT", 00:16:19.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:19.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:19.201 "hdgst": ${hdgst:-false}, 00:16:19.201 "ddgst": ${ddgst:-false} 00:16:19.201 }, 00:16:19.201 "method": "bdev_nvme_attach_controller" 00:16:19.201 } 00:16:19.201 EOF 00:16:19.201 )") 00:16:19.201 14:18:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:19.201 14:18:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:19.201 14:18:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:19.201 14:18:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:19.201 "params": { 00:16:19.201 "name": "Nvme0", 00:16:19.201 "trtype": "tcp", 00:16:19.201 "traddr": "10.0.0.2", 00:16:19.201 "adrfam": "ipv4", 00:16:19.201 "trsvcid": "4420", 00:16:19.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:19.201 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:19.201 "hdgst": false, 00:16:19.201 "ddgst": false 00:16:19.201 }, 00:16:19.201 "method": "bdev_nvme_attach_controller" 00:16:19.201 }' 00:16:19.201 [2024-07-10 14:18:28.331165] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:19.201 [2024-07-10 14:18:28.331311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352439 ] 00:16:19.201 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.201 [2024-07-10 14:18:28.456497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.459 [2024-07-10 14:18:28.694900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.024 Running I/O for 1 seconds... 
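For context on the re-launch above: bdevperf takes its attach configuration as SPDK JSON config on /dev/fd/62, filled in from the gen_nvmf_target_json heredoc shown in the trace. A hand-run equivalent could look roughly like the sketch below; the /tmp path is hypothetical, and the subsystems/bdev wrapper is the standard SPDK JSON-config layout rather than something printed verbatim in this excerpt, so treat it as an illustration only:

    # sketch: persist the generated config to a file and re-run bdevperf by hand
    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false },
        "method": "bdev_nvme_attach_controller" } ] } ] }
    EOF
    ./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1

The -q 64 -o 65536 -w verify flags match the queue depth, I/O size and workload used by both bdevperf runs in this test.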
00:16:20.955 00:16:20.955 Latency(us) 00:16:20.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.955 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.955 Verification LBA range: start 0x0 length 0x400 00:16:20.955 Nvme0n1 : 1.03 1299.68 81.23 0.00 0.00 48423.01 10145.94 41166.32 00:16:20.955 =================================================================================================================== 00:16:20.955 Total : 1299.68 81.23 0.00 0.00 48423.01 10145.94 41166.32 00:16:21.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1352130 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:16:21.888 14:18:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:21.888 14:18:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:21.888 14:18:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:21.888 14:18:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:21.888 14:18:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:21.888 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:21.888 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:21.888 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.888 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:21.888 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.888 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.888 rmmod nvme_tcp 00:16:22.145 rmmod nvme_fabrics 00:16:22.145 rmmod nvme_keyring 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1351928 ']' 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1351928 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1351928 ']' 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1351928 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1351928 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1351928' 00:16:22.145 killing process with 
pid 1351928 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1351928 00:16:22.145 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1351928 00:16:23.515 [2024-07-10 14:18:32.748186] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:23.515 14:18:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:23.515 14:18:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:23.515 14:18:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:23.515 14:18:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.515 14:18:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:23.515 14:18:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.515 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.515 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.043 14:18:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:26.043 14:18:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:26.043 00:16:26.043 real 0m12.330s 00:16:26.043 user 0m34.365s 00:16:26.043 sys 0m3.094s 00:16:26.043 14:18:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:26.043 14:18:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.043 ************************************ 00:16:26.043 END TEST nvmf_host_management 00:16:26.043 ************************************ 00:16:26.043 14:18:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:26.043 14:18:34 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:26.043 14:18:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:26.043 14:18:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.043 14:18:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:26.043 ************************************ 00:16:26.043 START TEST nvmf_lvol 00:16:26.043 ************************************ 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:26.043 * Looking for test storage... 
00:16:26.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.043 14:18:34 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.043 14:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:26.043 14:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:27.417 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:27.417 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:27.417 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.417 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:27.418 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:27.418 
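Aside on the discovery trace above: after matching the two E810 ports by PCI ID (0x8086:0x159b at 0000:0a:00.0 and 0000:0a:00.1), the script resolves their kernel interface names through the sysfs glob /sys/bus/pci/devices/$pci/net/*. The equivalent manual check, shown only as an illustration with the addresses from this node:

    ls /sys/bus/pci/devices/0000:0a:00.0/net/    # -> cvl_0_0 on this node
    ls /sys/bus/pci/devices/0000:0a:00.1/net/    # -> cvl_0_1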
14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:27.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:16:27.418 00:16:27.418 --- 10.0.0.2 ping statistics --- 00:16:27.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.418 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:27.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:16:27.418 00:16:27.418 --- 10.0.0.1 ping statistics --- 00:16:27.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.418 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:27.418 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:27.676 14:18:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:27.676 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:27.676 14:18:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:27.676 14:18:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:27.676 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1354776 00:16:27.676 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:27.676 14:18:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1354776 00:16:27.676 14:18:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1354776 ']' 00:16:27.676 14:18:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.676 14:18:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.676 14:18:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.676 14:18:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.676 14:18:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:27.676 [2024-07-10 14:18:36.994071] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:27.676 [2024-07-10 14:18:36.994223] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.676 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.935 [2024-07-10 14:18:37.163104] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:27.935 [2024-07-10 14:18:37.392262] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.935 [2024-07-10 14:18:37.392331] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:27.935 [2024-07-10 14:18:37.392377] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.935 [2024-07-10 14:18:37.392395] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.935 [2024-07-10 14:18:37.392413] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.935 [2024-07-10 14:18:37.392534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.935 [2024-07-10 14:18:37.392572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.935 [2024-07-10 14:18:37.392583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.501 14:18:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.501 14:18:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:28.501 14:18:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.501 14:18:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:28.501 14:18:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:28.501 14:18:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.501 14:18:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:29.067 [2024-07-10 14:18:38.251555] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.067 14:18:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:29.325 14:18:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:29.325 14:18:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:29.584 14:18:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:29.584 14:18:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:29.842 14:18:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:30.409 14:18:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f92d6d0d-bb51-496c-9553-be8308ca20ab 00:16:30.409 14:18:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f92d6d0d-bb51-496c-9553-be8308ca20ab lvol 20 00:16:30.409 14:18:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ff41410b-82f9-4880-9ce7-116619cca530 00:16:30.409 14:18:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:30.667 14:18:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ff41410b-82f9-4880-9ce7-116619cca530 00:16:30.925 14:18:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
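Everything from nvmf_tcp_init through the nvmf_subsystem_add_listener call above boils down to: isolate one E810 port in a network namespace so initiator and target talk over real TCP, start nvmf_tgt inside that namespace, and publish a raid0-backed logical volume as namespace 1 of nqn.2016-06.io.spdk:cnode0. The condensed sketch below mirrors the commands in the trace but is not the test script itself; the cvl_0_* interface names, 10.0.0.x addresses and relative paths are stand-ins taken from this run.

# Topology: cvl_0_1 (initiator, root netns, 10.0.0.1) <-> cvl_0_0 (target, cvl_0_0_ns_spdk, 10.0.0.2)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# The target runs inside the namespace; rpc.py reaches it via /var/tmp/spdk.sock.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0x7 &
# (the real test waits for the rpc socket with waitforlisten before issuing rpc.py calls)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512                      # -> Malloc0
./scripts/rpc.py bdev_malloc_create 64 512                      # -> Malloc1
./scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs)      # prints the lvstore UUID
lvol=$(./scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB volume, prints its UUID
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420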
00:16:31.183 [2024-07-10 14:18:40.658568] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.440 14:18:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:31.698 14:18:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1355326 00:16:31.698 14:18:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:31.698 14:18:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:31.698 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.631 14:18:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ff41410b-82f9-4880-9ce7-116619cca530 MY_SNAPSHOT 00:16:32.888 14:18:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e7b289ab-3b27-4acc-a8e6-fab2ab1638b5 00:16:32.888 14:18:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ff41410b-82f9-4880-9ce7-116619cca530 30 00:16:33.146 14:18:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e7b289ab-3b27-4acc-a8e6-fab2ab1638b5 MY_CLONE 00:16:33.711 14:18:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ceaf78a6-a857-4527-997a-4506953a73a9 00:16:33.711 14:18:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ceaf78a6-a857-4527-997a-4506953a73a9 00:16:34.645 14:18:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1355326 00:16:42.753 Initializing NVMe Controllers 00:16:42.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:42.753 Controller IO queue size 128, less than required. 00:16:42.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:42.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:42.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:42.753 Initialization complete. Launching workers. 
00:16:42.753 ======================================================== 00:16:42.753 Latency(us) 00:16:42.753 Device Information : IOPS MiB/s Average min max 00:16:42.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8388.70 32.77 15265.80 521.20 174902.43 00:16:42.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8226.60 32.14 15565.67 3371.02 188877.43 00:16:42.753 ======================================================== 00:16:42.753 Total : 16615.30 64.90 15414.27 521.20 188877.43 00:16:42.753 00:16:42.753 14:18:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:42.753 14:18:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ff41410b-82f9-4880-9ce7-116619cca530 00:16:42.753 14:18:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f92d6d0d-bb51-496c-9553-be8308ca20ab 00:16:42.753 14:18:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:42.753 14:18:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:42.753 14:18:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:42.753 14:18:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:42.753 14:18:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:42.753 14:18:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.753 14:18:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:42.753 14:18:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.753 14:18:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.753 rmmod nvme_tcp 00:16:42.753 rmmod nvme_fabrics 00:16:42.753 rmmod nvme_keyring 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1354776 ']' 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1354776 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1354776 ']' 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1354776 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1354776 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1354776' 00:16:43.011 killing process with pid 1354776 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1354776 00:16:43.011 14:18:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1354776 00:16:44.415 14:18:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:44.415 
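With the target exported, the rest of the lvol test above interleaves a 10-second random-write workload with the logical-volume operations under test: snapshot the origin, resize the origin from 20 to 30 MiB, clone the snapshot, inflate the clone so it stops sharing clusters with its snapshot, then tear everything down (delete the subsystem, the lvol and the lvstore, unload nvme-tcp). A rough sketch of that sequence, reusing the UUID printed earlier in the trace rather than the literal test script:

# I/O keeps running against the exported namespace while the lvol operations execute.
./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!

lvol=ff41410b-82f9-4880-9ce7-116619cca530                         # origin lvol UUID from this run
snap=$(./scripts/rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # prints the snapshot UUID
./scripts/rpc.py bdev_lvol_resize "$lvol" 30                      # grow the origin 20 -> 30 MiB
clone=$(./scripts/rpc.py bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
./scripts/rpc.py bdev_lvol_inflate "$clone"                       # clone gets its own clusters
wait "$perf_pid"                                                  # perf report is the table above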
14:18:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:44.415 14:18:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:44.415 14:18:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.415 14:18:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:44.415 14:18:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.415 14:18:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.415 14:18:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:46.966 00:16:46.966 real 0m20.934s 00:16:46.966 user 1m10.707s 00:16:46.966 sys 0m5.241s 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:46.966 ************************************ 00:16:46.966 END TEST nvmf_lvol 00:16:46.966 ************************************ 00:16:46.966 14:18:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:46.966 14:18:55 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:46.966 14:18:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:46.966 14:18:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:46.966 14:18:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:46.966 ************************************ 00:16:46.966 START TEST nvmf_lvs_grow 00:16:46.966 ************************************ 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:46.966 * Looking for test storage... 
00:16:46.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.966 14:18:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.966 14:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:46.966 14:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:46.966 14:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:46.966 14:18:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:48.342 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:48.342 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:48.342 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.342 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:48.343 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.343 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:48.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:16:48.601 00:16:48.601 --- 10.0.0.2 ping statistics --- 00:16:48.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.601 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:16:48.601 00:16:48.601 --- 10.0.0.1 ping statistics --- 00:16:48.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.601 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1358722 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1358722 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1358722 ']' 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.601 14:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:48.601 [2024-07-10 14:18:58.014831] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:48.601 [2024-07-10 14:18:58.014963] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.860 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.860 [2024-07-10 14:18:58.178803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.117 [2024-07-10 14:18:58.421379] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.117 [2024-07-10 14:18:58.421484] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:49.117 [2024-07-10 14:18:58.421524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.117 [2024-07-10 14:18:58.421545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.117 [2024-07-10 14:18:58.421561] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.117 [2024-07-10 14:18:58.421606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.683 14:18:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.683 14:18:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:16:49.683 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:49.683 14:18:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:49.683 14:18:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:49.683 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.683 14:18:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:49.941 [2024-07-10 14:18:59.327322] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.941 14:18:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:49.941 14:18:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:49.941 14:18:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.941 14:18:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:49.941 ************************************ 00:16:49.941 START TEST lvs_grow_clean 00:16:49.941 ************************************ 00:16:49.941 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:16:49.941 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:49.941 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:49.941 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:49.941 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:49.941 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:49.942 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:49.942 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:49.942 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:49.942 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:50.199 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:50.199 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:50.456 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=09705f64-da0a-406d-8233-dba7a99b9900 00:16:50.456 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09705f64-da0a-406d-8233-dba7a99b9900 00:16:50.456 14:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:50.714 14:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:50.714 14:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:50.714 14:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 09705f64-da0a-406d-8233-dba7a99b9900 lvol 150 00:16:50.971 14:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ef91ed75-2a60-4c5f-ba83-b8bd0edc4817 00:16:50.971 14:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:50.971 14:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:51.229 [2024-07-10 14:19:00.669028] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:51.229 [2024-07-10 14:19:00.669162] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:51.229 true 00:16:51.229 14:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09705f64-da0a-406d-8233-dba7a99b9900 00:16:51.229 14:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:51.486 14:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:51.486 14:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:52.052 14:19:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ef91ed75-2a60-4c5f-ba83-b8bd0edc4817 00:16:52.309 14:19:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:52.309 [2024-07-10 14:19:01.780730] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.567 14:19:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:52.567 14:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1359178 00:16:52.567 14:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:52.567 14:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:52.567 14:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1359178 /var/tmp/bdevperf.sock 00:16:52.567 14:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1359178 ']' 00:16:52.567 14:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.567 14:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.567 14:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.567 14:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.567 14:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:52.825 [2024-07-10 14:19:02.116269] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:16:52.825 [2024-07-10 14:19:02.116446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1359178 ] 00:16:52.825 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.825 [2024-07-10 14:19:02.246540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.083 [2024-07-10 14:19:02.487514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.648 14:19:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.648 14:19:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:16:53.648 14:19:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:54.211 Nvme0n1 00:16:54.211 14:19:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:54.468 [ 00:16:54.468 { 00:16:54.468 "name": "Nvme0n1", 00:16:54.468 "aliases": [ 00:16:54.468 "ef91ed75-2a60-4c5f-ba83-b8bd0edc4817" 00:16:54.468 ], 00:16:54.468 "product_name": "NVMe disk", 00:16:54.468 "block_size": 4096, 00:16:54.468 "num_blocks": 38912, 00:16:54.468 "uuid": "ef91ed75-2a60-4c5f-ba83-b8bd0edc4817", 00:16:54.468 "assigned_rate_limits": { 00:16:54.468 "rw_ios_per_sec": 0, 00:16:54.468 "rw_mbytes_per_sec": 0, 00:16:54.468 "r_mbytes_per_sec": 0, 00:16:54.468 "w_mbytes_per_sec": 0 00:16:54.468 }, 00:16:54.468 "claimed": false, 00:16:54.468 "zoned": false, 00:16:54.468 "supported_io_types": { 00:16:54.468 "read": true, 00:16:54.468 "write": true, 00:16:54.468 "unmap": true, 00:16:54.468 "flush": true, 00:16:54.468 "reset": true, 00:16:54.468 "nvme_admin": true, 00:16:54.468 "nvme_io": true, 00:16:54.468 "nvme_io_md": false, 00:16:54.468 "write_zeroes": true, 00:16:54.468 "zcopy": false, 00:16:54.468 "get_zone_info": false, 00:16:54.468 "zone_management": false, 00:16:54.468 "zone_append": false, 00:16:54.468 "compare": true, 00:16:54.468 "compare_and_write": true, 00:16:54.468 "abort": true, 00:16:54.468 "seek_hole": false, 00:16:54.468 "seek_data": false, 00:16:54.468 "copy": true, 00:16:54.468 "nvme_iov_md": false 00:16:54.468 }, 00:16:54.468 "memory_domains": [ 00:16:54.468 { 00:16:54.468 "dma_device_id": "system", 00:16:54.468 "dma_device_type": 1 00:16:54.468 } 00:16:54.468 ], 00:16:54.468 "driver_specific": { 00:16:54.468 "nvme": [ 00:16:54.468 { 00:16:54.468 "trid": { 00:16:54.468 "trtype": "TCP", 00:16:54.468 "adrfam": "IPv4", 00:16:54.468 "traddr": "10.0.0.2", 00:16:54.468 "trsvcid": "4420", 00:16:54.468 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:54.468 }, 00:16:54.468 "ctrlr_data": { 00:16:54.468 "cntlid": 1, 00:16:54.468 "vendor_id": "0x8086", 00:16:54.468 "model_number": "SPDK bdev Controller", 00:16:54.468 "serial_number": "SPDK0", 00:16:54.468 "firmware_revision": "24.09", 00:16:54.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:54.468 "oacs": { 00:16:54.468 "security": 0, 00:16:54.468 "format": 0, 00:16:54.468 "firmware": 0, 00:16:54.468 "ns_manage": 0 00:16:54.468 }, 00:16:54.468 "multi_ctrlr": true, 00:16:54.468 "ana_reporting": false 00:16:54.468 }, 
00:16:54.468 "vs": { 00:16:54.468 "nvme_version": "1.3" 00:16:54.468 }, 00:16:54.468 "ns_data": { 00:16:54.468 "id": 1, 00:16:54.468 "can_share": true 00:16:54.468 } 00:16:54.468 } 00:16:54.468 ], 00:16:54.468 "mp_policy": "active_passive" 00:16:54.468 } 00:16:54.468 } 00:16:54.468 ] 00:16:54.468 14:19:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1359432 00:16:54.468 14:19:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:54.469 14:19:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:54.469 Running I/O for 10 seconds... 00:16:55.842 Latency(us) 00:16:55.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.843 Nvme0n1 : 1.00 11127.00 43.46 0.00 0.00 0.00 0.00 0.00 00:16:55.843 =================================================================================================================== 00:16:55.843 Total : 11127.00 43.46 0.00 0.00 0.00 0.00 0.00 00:16:55.843 00:16:56.408 14:19:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 09705f64-da0a-406d-8233-dba7a99b9900 00:16:56.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.666 Nvme0n1 : 2.00 11185.50 43.69 0.00 0.00 0.00 0.00 0.00 00:16:56.666 =================================================================================================================== 00:16:56.666 Total : 11185.50 43.69 0.00 0.00 0.00 0.00 0.00 00:16:56.666 00:16:56.666 true 00:16:56.666 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09705f64-da0a-406d-8233-dba7a99b9900 00:16:56.666 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:56.924 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:56.924 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:56.924 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1359432 00:16:57.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.489 Nvme0n1 : 3.00 11208.00 43.78 0.00 0.00 0.00 0.00 0.00 00:16:57.489 =================================================================================================================== 00:16:57.489 Total : 11208.00 43.78 0.00 0.00 0.00 0.00 0.00 00:16:57.489 00:16:58.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.863 Nvme0n1 : 4.00 11171.25 43.64 0.00 0.00 0.00 0.00 0.00 00:16:58.863 =================================================================================================================== 00:16:58.863 Total : 11171.25 43.64 0.00 0.00 0.00 0.00 0.00 00:16:58.863 00:16:59.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.797 Nvme0n1 : 5.00 11174.60 43.65 0.00 0.00 0.00 0.00 0.00 00:16:59.797 =================================================================================================================== 00:16:59.797 
Total : 11174.60 43.65 0.00 0.00 0.00 0.00 0.00 00:16:59.797 00:17:00.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.730 Nvme0n1 : 6.00 11252.17 43.95 0.00 0.00 0.00 0.00 0.00 00:17:00.730 =================================================================================================================== 00:17:00.731 Total : 11252.17 43.95 0.00 0.00 0.00 0.00 0.00 00:17:00.731 00:17:01.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.663 Nvme0n1 : 7.00 11251.86 43.95 0.00 0.00 0.00 0.00 0.00 00:17:01.663 =================================================================================================================== 00:17:01.663 Total : 11251.86 43.95 0.00 0.00 0.00 0.00 0.00 00:17:01.663 00:17:02.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.597 Nvme0n1 : 8.00 11283.75 44.08 0.00 0.00 0.00 0.00 0.00 00:17:02.597 =================================================================================================================== 00:17:02.597 Total : 11283.75 44.08 0.00 0.00 0.00 0.00 0.00 00:17:02.597 00:17:03.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.529 Nvme0n1 : 9.00 11318.33 44.21 0.00 0.00 0.00 0.00 0.00 00:17:03.529 =================================================================================================================== 00:17:03.529 Total : 11318.33 44.21 0.00 0.00 0.00 0.00 0.00 00:17:03.529 00:17:04.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.462 Nvme0n1 : 10.00 11323.40 44.23 0.00 0.00 0.00 0.00 0.00 00:17:04.462 =================================================================================================================== 00:17:04.462 Total : 11323.40 44.23 0.00 0.00 0.00 0.00 0.00 00:17:04.462 00:17:04.720 00:17:04.721 Latency(us) 00:17:04.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.721 Nvme0n1 : 10.01 11321.68 44.23 0.00 0.00 11298.93 6699.24 22330.79 00:17:04.721 =================================================================================================================== 00:17:04.721 Total : 11321.68 44.23 0.00 0.00 11298.93 6699.24 22330.79 00:17:04.721 0 00:17:04.721 14:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1359178 00:17:04.721 14:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1359178 ']' 00:17:04.721 14:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1359178 00:17:04.721 14:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:04.721 14:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:04.721 14:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1359178 00:17:04.721 14:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:04.721 14:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:04.721 14:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1359178' 00:17:04.721 killing process with pid 1359178 00:17:04.721 14:19:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1359178 00:17:04.721 Received shutdown signal, test time was about 10.000000 seconds 00:17:04.721 00:17:04.721 Latency(us) 00:17:04.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.721 =================================================================================================================== 00:17:04.721 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:04.721 14:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1359178 00:17:05.655 14:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:05.914 14:19:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:06.172 14:19:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09705f64-da0a-406d-8233-dba7a99b9900 00:17:06.172 14:19:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:06.430 14:19:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:06.430 14:19:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:06.430 14:19:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:06.689 [2024-07-10 14:19:16.048524] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:06.689 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09705f64-da0a-406d-8233-dba7a99b9900 00:17:06.689 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:06.689 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09705f64-da0a-406d-8233-dba7a99b9900 00:17:06.689 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.689 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.689 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.689 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.689 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.689 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.689 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.689 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:06.689 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09705f64-da0a-406d-8233-dba7a99b9900 00:17:06.947 request: 00:17:06.947 { 00:17:06.947 "uuid": "09705f64-da0a-406d-8233-dba7a99b9900", 00:17:06.947 "method": "bdev_lvol_get_lvstores", 00:17:06.947 "req_id": 1 00:17:06.947 } 00:17:06.947 Got JSON-RPC error response 00:17:06.947 response: 00:17:06.947 { 00:17:06.947 "code": -19, 00:17:06.947 "message": "No such device" 00:17:06.947 } 00:17:06.947 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:06.947 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:06.947 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:06.947 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:06.947 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:07.205 aio_bdev 00:17:07.205 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ef91ed75-2a60-4c5f-ba83-b8bd0edc4817 00:17:07.205 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=ef91ed75-2a60-4c5f-ba83-b8bd0edc4817 00:17:07.205 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:07.205 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:07.205 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:07.205 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:07.205 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:07.769 14:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ef91ed75-2a60-4c5f-ba83-b8bd0edc4817 -t 2000 00:17:08.026 [ 00:17:08.026 { 00:17:08.026 "name": "ef91ed75-2a60-4c5f-ba83-b8bd0edc4817", 00:17:08.026 "aliases": [ 00:17:08.026 "lvs/lvol" 00:17:08.026 ], 00:17:08.026 "product_name": "Logical Volume", 00:17:08.026 "block_size": 4096, 00:17:08.026 "num_blocks": 38912, 00:17:08.026 "uuid": "ef91ed75-2a60-4c5f-ba83-b8bd0edc4817", 00:17:08.026 "assigned_rate_limits": { 00:17:08.026 "rw_ios_per_sec": 0, 00:17:08.026 "rw_mbytes_per_sec": 0, 00:17:08.026 "r_mbytes_per_sec": 0, 00:17:08.026 "w_mbytes_per_sec": 0 00:17:08.026 }, 00:17:08.026 "claimed": false, 00:17:08.026 "zoned": false, 00:17:08.026 "supported_io_types": { 00:17:08.026 "read": true, 00:17:08.026 "write": true, 00:17:08.026 "unmap": true, 00:17:08.026 "flush": false, 00:17:08.026 "reset": true, 00:17:08.026 "nvme_admin": false, 00:17:08.026 "nvme_io": false, 00:17:08.026 
"nvme_io_md": false, 00:17:08.026 "write_zeroes": true, 00:17:08.026 "zcopy": false, 00:17:08.026 "get_zone_info": false, 00:17:08.026 "zone_management": false, 00:17:08.026 "zone_append": false, 00:17:08.026 "compare": false, 00:17:08.026 "compare_and_write": false, 00:17:08.026 "abort": false, 00:17:08.026 "seek_hole": true, 00:17:08.026 "seek_data": true, 00:17:08.026 "copy": false, 00:17:08.026 "nvme_iov_md": false 00:17:08.026 }, 00:17:08.026 "driver_specific": { 00:17:08.026 "lvol": { 00:17:08.026 "lvol_store_uuid": "09705f64-da0a-406d-8233-dba7a99b9900", 00:17:08.026 "base_bdev": "aio_bdev", 00:17:08.026 "thin_provision": false, 00:17:08.026 "num_allocated_clusters": 38, 00:17:08.026 "snapshot": false, 00:17:08.026 "clone": false, 00:17:08.026 "esnap_clone": false 00:17:08.026 } 00:17:08.026 } 00:17:08.026 } 00:17:08.026 ] 00:17:08.026 14:19:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:08.026 14:19:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09705f64-da0a-406d-8233-dba7a99b9900 00:17:08.026 14:19:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:08.283 14:19:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:08.283 14:19:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09705f64-da0a-406d-8233-dba7a99b9900 00:17:08.283 14:19:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:08.540 14:19:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:08.540 14:19:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ef91ed75-2a60-4c5f-ba83-b8bd0edc4817 00:17:08.798 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 09705f64-da0a-406d-8233-dba7a99b9900 00:17:09.055 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:09.313 00:17:09.313 real 0m19.249s 00:17:09.313 user 0m18.852s 00:17:09.313 sys 0m2.010s 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:09.313 ************************************ 00:17:09.313 END TEST lvs_grow_clean 00:17:09.313 ************************************ 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:09.313 ************************************ 00:17:09.313 START TEST lvs_grow_dirty 00:17:09.313 ************************************ 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:09.313 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:09.570 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:09.570 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:09.828 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=096eab7d-7794-4f61-8e51-b638c83bc931 00:17:09.828 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 096eab7d-7794-4f61-8e51-b638c83bc931 00:17:09.828 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:10.085 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:10.085 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:10.085 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 096eab7d-7794-4f61-8e51-b638c83bc931 lvol 150 00:17:10.342 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2 00:17:10.342 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:10.342 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:10.600 
[2024-07-10 14:19:19.978087] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:10.600 [2024-07-10 14:19:19.978226] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:10.600 true 00:17:10.600 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 096eab7d-7794-4f61-8e51-b638c83bc931 00:17:10.600 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:10.859 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:10.859 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:11.117 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2 00:17:11.375 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:11.633 [2024-07-10 14:19:21.029516] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.633 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:11.920 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1361502 00:17:11.920 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:11.920 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:11.920 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1361502 /var/tmp/bdevperf.sock 00:17:11.920 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1361502 ']' 00:17:11.920 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.920 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.920 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:11.920 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.920 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:12.203 [2024-07-10 14:19:21.423869] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:17:12.204 [2024-07-10 14:19:21.424024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1361502 ] 00:17:12.204 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.204 [2024-07-10 14:19:21.553369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.462 [2024-07-10 14:19:21.806571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.028 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.028 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:13.028 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:13.286 Nvme0n1 00:17:13.286 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:13.544 [ 00:17:13.544 { 00:17:13.544 "name": "Nvme0n1", 00:17:13.544 "aliases": [ 00:17:13.544 "e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2" 00:17:13.544 ], 00:17:13.544 "product_name": "NVMe disk", 00:17:13.544 "block_size": 4096, 00:17:13.544 "num_blocks": 38912, 00:17:13.544 "uuid": "e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2", 00:17:13.544 "assigned_rate_limits": { 00:17:13.544 "rw_ios_per_sec": 0, 00:17:13.544 "rw_mbytes_per_sec": 0, 00:17:13.544 "r_mbytes_per_sec": 0, 00:17:13.544 "w_mbytes_per_sec": 0 00:17:13.544 }, 00:17:13.544 "claimed": false, 00:17:13.544 "zoned": false, 00:17:13.544 "supported_io_types": { 00:17:13.544 "read": true, 00:17:13.544 "write": true, 00:17:13.544 "unmap": true, 00:17:13.544 "flush": true, 00:17:13.544 "reset": true, 00:17:13.544 "nvme_admin": true, 00:17:13.544 "nvme_io": true, 00:17:13.544 "nvme_io_md": false, 00:17:13.544 "write_zeroes": true, 00:17:13.544 "zcopy": false, 00:17:13.544 "get_zone_info": false, 00:17:13.544 "zone_management": false, 00:17:13.544 "zone_append": false, 00:17:13.544 "compare": true, 00:17:13.544 "compare_and_write": true, 00:17:13.544 "abort": true, 00:17:13.544 "seek_hole": false, 00:17:13.544 "seek_data": false, 00:17:13.544 "copy": true, 00:17:13.544 "nvme_iov_md": false 00:17:13.544 }, 00:17:13.544 "memory_domains": [ 00:17:13.544 { 00:17:13.544 "dma_device_id": "system", 00:17:13.544 "dma_device_type": 1 00:17:13.544 } 00:17:13.544 ], 00:17:13.544 "driver_specific": { 00:17:13.544 "nvme": [ 00:17:13.544 { 00:17:13.544 "trid": { 00:17:13.544 "trtype": "TCP", 00:17:13.544 "adrfam": "IPv4", 00:17:13.544 "traddr": "10.0.0.2", 00:17:13.544 "trsvcid": "4420", 00:17:13.544 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:13.544 }, 00:17:13.544 "ctrlr_data": { 00:17:13.544 "cntlid": 1, 00:17:13.544 "vendor_id": "0x8086", 00:17:13.544 "model_number": "SPDK bdev Controller", 00:17:13.544 "serial_number": "SPDK0", 
00:17:13.544 "firmware_revision": "24.09", 00:17:13.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:13.544 "oacs": { 00:17:13.544 "security": 0, 00:17:13.544 "format": 0, 00:17:13.544 "firmware": 0, 00:17:13.544 "ns_manage": 0 00:17:13.544 }, 00:17:13.544 "multi_ctrlr": true, 00:17:13.544 "ana_reporting": false 00:17:13.544 }, 00:17:13.544 "vs": { 00:17:13.544 "nvme_version": "1.3" 00:17:13.544 }, 00:17:13.544 "ns_data": { 00:17:13.544 "id": 1, 00:17:13.544 "can_share": true 00:17:13.544 } 00:17:13.544 } 00:17:13.544 ], 00:17:13.544 "mp_policy": "active_passive" 00:17:13.544 } 00:17:13.544 } 00:17:13.544 ] 00:17:13.544 14:19:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1361742 00:17:13.544 14:19:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:13.544 14:19:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:13.800 Running I/O for 10 seconds... 00:17:14.733 Latency(us) 00:17:14.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.733 Nvme0n1 : 1.00 10870.00 42.46 0.00 0.00 0.00 0.00 0.00 00:17:14.733 =================================================================================================================== 00:17:14.733 Total : 10870.00 42.46 0.00 0.00 0.00 0.00 0.00 00:17:14.733 00:17:15.667 14:19:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 096eab7d-7794-4f61-8e51-b638c83bc931 00:17:15.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.667 Nvme0n1 : 2.00 11026.00 43.07 0.00 0.00 0.00 0.00 0.00 00:17:15.667 =================================================================================================================== 00:17:15.667 Total : 11026.00 43.07 0.00 0.00 0.00 0.00 0.00 00:17:15.667 00:17:15.925 true 00:17:15.925 14:19:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 096eab7d-7794-4f61-8e51-b638c83bc931 00:17:15.925 14:19:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:16.183 14:19:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:16.183 14:19:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:16.183 14:19:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1361742 00:17:16.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.749 Nvme0n1 : 3.00 11124.33 43.45 0.00 0.00 0.00 0.00 0.00 00:17:16.749 =================================================================================================================== 00:17:16.749 Total : 11124.33 43.45 0.00 0.00 0.00 0.00 0.00 00:17:16.749 00:17:17.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.685 Nvme0n1 : 4.00 11169.25 43.63 0.00 0.00 0.00 0.00 0.00 00:17:17.685 =================================================================================================================== 00:17:17.685 Total : 11169.25 43.63 0.00 
0.00 0.00 0.00 0.00 00:17:17.685 00:17:19.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.060 Nvme0n1 : 5.00 11196.00 43.73 0.00 0.00 0.00 0.00 0.00 00:17:19.060 =================================================================================================================== 00:17:19.060 Total : 11196.00 43.73 0.00 0.00 0.00 0.00 0.00 00:17:19.060 00:17:19.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.995 Nvme0n1 : 6.00 11225.17 43.85 0.00 0.00 0.00 0.00 0.00 00:17:19.995 =================================================================================================================== 00:17:19.995 Total : 11225.17 43.85 0.00 0.00 0.00 0.00 0.00 00:17:19.995 00:17:20.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.930 Nvme0n1 : 7.00 11237.14 43.90 0.00 0.00 0.00 0.00 0.00 00:17:20.930 =================================================================================================================== 00:17:20.930 Total : 11237.14 43.90 0.00 0.00 0.00 0.00 0.00 00:17:20.930 00:17:21.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.866 Nvme0n1 : 8.00 11253.50 43.96 0.00 0.00 0.00 0.00 0.00 00:17:21.866 =================================================================================================================== 00:17:21.866 Total : 11253.50 43.96 0.00 0.00 0.00 0.00 0.00 00:17:21.866 00:17:22.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.801 Nvme0n1 : 9.00 11273.11 44.04 0.00 0.00 0.00 0.00 0.00 00:17:22.801 =================================================================================================================== 00:17:22.801 Total : 11273.11 44.04 0.00 0.00 0.00 0.00 0.00 00:17:22.801 00:17:23.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.734 Nvme0n1 : 10.00 11308.60 44.17 0.00 0.00 0.00 0.00 0.00 00:17:23.734 =================================================================================================================== 00:17:23.734 Total : 11308.60 44.17 0.00 0.00 0.00 0.00 0.00 00:17:23.734 00:17:23.734 00:17:23.734 Latency(us) 00:17:23.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.734 Nvme0n1 : 10.01 11311.59 44.19 0.00 0.00 11308.80 3325.35 30486.38 00:17:23.734 =================================================================================================================== 00:17:23.734 Total : 11311.59 44.19 0.00 0.00 11308.80 3325.35 30486.38 00:17:23.734 0 00:17:23.734 14:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1361502 00:17:23.734 14:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1361502 ']' 00:17:23.734 14:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1361502 00:17:23.734 14:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:17:23.734 14:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.734 14:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1361502 00:17:23.992 14:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:23.992 14:19:33 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:23.992 14:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1361502' 00:17:23.992 killing process with pid 1361502 00:17:23.992 14:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1361502 00:17:23.992 Received shutdown signal, test time was about 10.000000 seconds 00:17:23.992 00:17:23.992 Latency(us) 00:17:23.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.992 =================================================================================================================== 00:17:23.992 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:23.992 14:19:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1361502 00:17:24.926 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:25.183 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:25.441 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 096eab7d-7794-4f61-8e51-b638c83bc931 00:17:25.441 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1358722 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1358722 00:17:25.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1358722 Killed "${NVMF_APP[@]}" "$@" 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1363142 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1363142 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1363142 ']' 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.700 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:25.700 [2024-07-10 14:19:35.158172] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:17:25.700 [2024-07-10 14:19:35.158318] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.957 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.957 [2024-07-10 14:19:35.299055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.214 [2024-07-10 14:19:35.532408] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.214 [2024-07-10 14:19:35.532492] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.214 [2024-07-10 14:19:35.532520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.214 [2024-07-10 14:19:35.532545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.214 [2024-07-10 14:19:35.532565] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:26.214 [2024-07-10 14:19:35.532626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.777 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.777 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:26.777 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:26.777 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:26.777 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:26.777 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.777 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:27.034 [2024-07-10 14:19:36.404478] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:27.034 [2024-07-10 14:19:36.404714] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:27.034 [2024-07-10 14:19:36.404799] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:27.034 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:27.034 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2 00:17:27.034 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2 00:17:27.034 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:27.034 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:27.034 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:27.034 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:27.034 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:27.292 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2 -t 2000 00:17:27.549 [ 00:17:27.549 { 00:17:27.549 "name": "e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2", 00:17:27.549 "aliases": [ 00:17:27.549 "lvs/lvol" 00:17:27.549 ], 00:17:27.549 "product_name": "Logical Volume", 00:17:27.549 "block_size": 4096, 00:17:27.549 "num_blocks": 38912, 00:17:27.549 "uuid": "e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2", 00:17:27.549 "assigned_rate_limits": { 00:17:27.549 "rw_ios_per_sec": 0, 00:17:27.549 "rw_mbytes_per_sec": 0, 00:17:27.549 "r_mbytes_per_sec": 0, 00:17:27.549 "w_mbytes_per_sec": 0 00:17:27.549 }, 00:17:27.549 "claimed": false, 00:17:27.549 "zoned": false, 00:17:27.549 "supported_io_types": { 00:17:27.549 "read": true, 00:17:27.549 "write": true, 00:17:27.549 "unmap": true, 00:17:27.549 "flush": false, 00:17:27.549 "reset": true, 00:17:27.549 "nvme_admin": false, 00:17:27.549 "nvme_io": false, 00:17:27.549 "nvme_io_md": 
false, 00:17:27.549 "write_zeroes": true, 00:17:27.549 "zcopy": false, 00:17:27.549 "get_zone_info": false, 00:17:27.549 "zone_management": false, 00:17:27.549 "zone_append": false, 00:17:27.549 "compare": false, 00:17:27.549 "compare_and_write": false, 00:17:27.549 "abort": false, 00:17:27.549 "seek_hole": true, 00:17:27.549 "seek_data": true, 00:17:27.549 "copy": false, 00:17:27.549 "nvme_iov_md": false 00:17:27.549 }, 00:17:27.549 "driver_specific": { 00:17:27.549 "lvol": { 00:17:27.549 "lvol_store_uuid": "096eab7d-7794-4f61-8e51-b638c83bc931", 00:17:27.549 "base_bdev": "aio_bdev", 00:17:27.549 "thin_provision": false, 00:17:27.549 "num_allocated_clusters": 38, 00:17:27.549 "snapshot": false, 00:17:27.549 "clone": false, 00:17:27.549 "esnap_clone": false 00:17:27.549 } 00:17:27.549 } 00:17:27.549 } 00:17:27.549 ] 00:17:27.549 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:27.549 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 096eab7d-7794-4f61-8e51-b638c83bc931 00:17:27.549 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:27.807 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:27.807 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 096eab7d-7794-4f61-8e51-b638c83bc931 00:17:27.807 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:28.064 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:28.064 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:28.322 [2024-07-10 14:19:37.660956] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:28.322 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 096eab7d-7794-4f61-8e51-b638c83bc931 00:17:28.322 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:28.322 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 096eab7d-7794-4f61-8e51-b638c83bc931 00:17:28.322 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.322 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.322 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.322 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.322 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:17:28.322 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.322 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.322 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:28.322 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 096eab7d-7794-4f61-8e51-b638c83bc931 00:17:28.580 request: 00:17:28.580 { 00:17:28.580 "uuid": "096eab7d-7794-4f61-8e51-b638c83bc931", 00:17:28.580 "method": "bdev_lvol_get_lvstores", 00:17:28.580 "req_id": 1 00:17:28.580 } 00:17:28.580 Got JSON-RPC error response 00:17:28.580 response: 00:17:28.580 { 00:17:28.580 "code": -19, 00:17:28.580 "message": "No such device" 00:17:28.580 } 00:17:28.580 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:28.580 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:28.580 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:28.580 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:28.580 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:28.837 aio_bdev 00:17:28.837 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2 00:17:28.837 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2 00:17:28.837 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:28.837 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:28.837 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:28.837 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:28.837 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:29.096 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2 -t 2000 00:17:29.354 [ 00:17:29.354 { 00:17:29.354 "name": "e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2", 00:17:29.354 "aliases": [ 00:17:29.354 "lvs/lvol" 00:17:29.354 ], 00:17:29.354 "product_name": "Logical Volume", 00:17:29.354 "block_size": 4096, 00:17:29.354 "num_blocks": 38912, 00:17:29.354 "uuid": "e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2", 00:17:29.354 "assigned_rate_limits": { 00:17:29.354 "rw_ios_per_sec": 0, 00:17:29.354 "rw_mbytes_per_sec": 0, 00:17:29.354 "r_mbytes_per_sec": 0, 00:17:29.354 "w_mbytes_per_sec": 0 00:17:29.354 }, 00:17:29.354 "claimed": false, 00:17:29.354 "zoned": false, 00:17:29.354 "supported_io_types": { 
00:17:29.354 "read": true, 00:17:29.354 "write": true, 00:17:29.354 "unmap": true, 00:17:29.354 "flush": false, 00:17:29.354 "reset": true, 00:17:29.354 "nvme_admin": false, 00:17:29.354 "nvme_io": false, 00:17:29.354 "nvme_io_md": false, 00:17:29.354 "write_zeroes": true, 00:17:29.354 "zcopy": false, 00:17:29.354 "get_zone_info": false, 00:17:29.354 "zone_management": false, 00:17:29.354 "zone_append": false, 00:17:29.354 "compare": false, 00:17:29.354 "compare_and_write": false, 00:17:29.354 "abort": false, 00:17:29.354 "seek_hole": true, 00:17:29.354 "seek_data": true, 00:17:29.354 "copy": false, 00:17:29.354 "nvme_iov_md": false 00:17:29.354 }, 00:17:29.354 "driver_specific": { 00:17:29.354 "lvol": { 00:17:29.354 "lvol_store_uuid": "096eab7d-7794-4f61-8e51-b638c83bc931", 00:17:29.354 "base_bdev": "aio_bdev", 00:17:29.354 "thin_provision": false, 00:17:29.354 "num_allocated_clusters": 38, 00:17:29.354 "snapshot": false, 00:17:29.354 "clone": false, 00:17:29.354 "esnap_clone": false 00:17:29.354 } 00:17:29.354 } 00:17:29.354 } 00:17:29.354 ] 00:17:29.354 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:29.354 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 096eab7d-7794-4f61-8e51-b638c83bc931 00:17:29.354 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:29.611 14:19:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:29.611 14:19:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 096eab7d-7794-4f61-8e51-b638c83bc931 00:17:29.611 14:19:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:29.868 14:19:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:29.868 14:19:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e8a5599d-da92-4bfe-b0a4-d1c4b70ef7a2 00:17:30.128 14:19:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 096eab7d-7794-4f61-8e51-b638c83bc931 00:17:30.392 14:19:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.650 00:17:30.650 real 0m21.353s 00:17:30.650 user 0m54.363s 00:17:30.650 sys 0m4.775s 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:30.650 ************************************ 00:17:30.650 END TEST lvs_grow_dirty 00:17:30.650 ************************************ 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:30.650 nvmf_trace.0 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.650 14:19:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.650 rmmod nvme_tcp 00:17:30.650 rmmod nvme_fabrics 00:17:30.908 rmmod nvme_keyring 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1363142 ']' 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1363142 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1363142 ']' 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1363142 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1363142 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1363142' 00:17:30.908 killing process with pid 1363142 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1363142 00:17:30.908 14:19:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1363142 00:17:32.283 14:19:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:32.284 14:19:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:32.284 14:19:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:32.284 
14:19:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:32.284 14:19:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:32.284 14:19:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.284 14:19:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.284 14:19:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.190 14:19:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:34.190 00:17:34.190 real 0m47.536s 00:17:34.190 user 1m20.701s 00:17:34.190 sys 0m8.730s 00:17:34.190 14:19:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:34.190 14:19:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:34.190 ************************************ 00:17:34.190 END TEST nvmf_lvs_grow 00:17:34.190 ************************************ 00:17:34.190 14:19:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:34.190 14:19:43 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:34.190 14:19:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:34.190 14:19:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:34.190 14:19:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:34.190 ************************************ 00:17:34.190 START TEST nvmf_bdev_io_wait 00:17:34.190 ************************************ 00:17:34.190 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:34.190 * Looking for test storage... 
00:17:34.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:34.191 14:19:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:36.093 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.093 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:36.093 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:36.093 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:36.093 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:36.093 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:36.093 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:36.093 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:36.093 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:36.093 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:36.094 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:36.094 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:36.094 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:36.094 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:36.094 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:36.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:36.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:17:36.353 00:17:36.353 --- 10.0.0.2 ping statistics --- 00:17:36.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.353 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:17:36.353 00:17:36.353 --- 10.0.0.1 ping statistics --- 00:17:36.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.353 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1365861 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1365861 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1365861 ']' 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.353 14:19:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:36.353 [2024-07-10 14:19:45.742871] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
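The nvmftestinit trace above turned the two ice ports into a self-contained NVMe/TCP rig: cvl_0_0 is moved into a private network namespace and becomes the target side on 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator side on 10.0.0.1, TCP port 4420 is opened in iptables, both directions are ping-verified, nvme-tcp is loaded, and nvmf_tgt is then started inside the namespace. Condensed into plain commands, the same bring-up is roughly as follows (a sketch only; the interface and namespace names are the ones this particular run happens to use):

  ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in on the initiator port
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
  modprobe nvme-tcp                                              # kernel initiator driver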
00:17:36.353 [2024-07-10 14:19:45.743008] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.353 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.612 [2024-07-10 14:19:45.874793] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.870 [2024-07-10 14:19:46.132751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.870 [2024-07-10 14:19:46.132819] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.870 [2024-07-10 14:19:46.132846] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.870 [2024-07-10 14:19:46.132866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.870 [2024-07-10 14:19:46.132887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.870 [2024-07-10 14:19:46.133010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.870 [2024-07-10 14:19:46.133076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.870 [2024-07-10 14:19:46.133156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.870 [2024-07-10 14:19:46.133167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.435 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.435 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:17:37.435 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.435 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:37.435 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.435 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.435 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:37.435 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.435 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.435 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.435 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:37.435 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.435 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.694 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.694 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:37.694 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.694 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.694 [2024-07-10 14:19:46.948946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.694 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
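With the TCP transport created, the target is provisioned entirely over JSON-RPC; the rpc_cmd calls here and in the lines that follow boil down to the sequence below, shown as scripts/rpc.py invocations against the default /var/tmp/spdk.sock socket (a sketch of what the rpc_cmd wrapper does, not a different command set). The deliberately tiny bdev_io pool (-p 5 -c 1) is what makes I/O exhaust the pool and exercise the bdev io-wait path this test is named after.

  ./scripts/rpc.py bdev_set_options -p 5 -c 1          # tiny bdev_io pool/cache, set before framework init
  ./scripts/rpc.py framework_start_init                # finish the init deferred by --wait-for-rpc
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB malloc bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420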
00:17:37.694 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:37.694 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.694 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.694 Malloc0 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.694 [2024-07-10 14:19:47.060800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1366017 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1366019 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:37.694 { 00:17:37.694 "params": { 00:17:37.694 "name": "Nvme$subsystem", 00:17:37.694 "trtype": "$TEST_TRANSPORT", 00:17:37.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.694 "adrfam": "ipv4", 00:17:37.694 "trsvcid": "$NVMF_PORT", 00:17:37.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.694 "hdgst": ${hdgst:-false}, 00:17:37.694 "ddgst": ${ddgst:-false} 00:17:37.694 }, 00:17:37.694 "method": "bdev_nvme_attach_controller" 00:17:37.694 } 00:17:37.694 EOF 00:17:37.694 )") 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:37.694 14:19:47 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1366021 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:37.694 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:37.694 { 00:17:37.694 "params": { 00:17:37.694 "name": "Nvme$subsystem", 00:17:37.694 "trtype": "$TEST_TRANSPORT", 00:17:37.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.694 "adrfam": "ipv4", 00:17:37.695 "trsvcid": "$NVMF_PORT", 00:17:37.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.695 "hdgst": ${hdgst:-false}, 00:17:37.695 "ddgst": ${ddgst:-false} 00:17:37.695 }, 00:17:37.695 "method": "bdev_nvme_attach_controller" 00:17:37.695 } 00:17:37.695 EOF 00:17:37.695 )") 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1366024 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:37.695 { 00:17:37.695 "params": { 00:17:37.695 "name": "Nvme$subsystem", 00:17:37.695 "trtype": "$TEST_TRANSPORT", 00:17:37.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.695 "adrfam": "ipv4", 00:17:37.695 "trsvcid": "$NVMF_PORT", 00:17:37.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.695 "hdgst": ${hdgst:-false}, 00:17:37.695 "ddgst": ${ddgst:-false} 00:17:37.695 }, 00:17:37.695 "method": "bdev_nvme_attach_controller" 00:17:37.695 } 00:17:37.695 EOF 00:17:37.695 )") 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 
-- # config+=("$(cat <<-EOF 00:17:37.695 { 00:17:37.695 "params": { 00:17:37.695 "name": "Nvme$subsystem", 00:17:37.695 "trtype": "$TEST_TRANSPORT", 00:17:37.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.695 "adrfam": "ipv4", 00:17:37.695 "trsvcid": "$NVMF_PORT", 00:17:37.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.695 "hdgst": ${hdgst:-false}, 00:17:37.695 "ddgst": ${ddgst:-false} 00:17:37.695 }, 00:17:37.695 "method": "bdev_nvme_attach_controller" 00:17:37.695 } 00:17:37.695 EOF 00:17:37.695 )") 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1366017 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:37.695 "params": { 00:17:37.695 "name": "Nvme1", 00:17:37.695 "trtype": "tcp", 00:17:37.695 "traddr": "10.0.0.2", 00:17:37.695 "adrfam": "ipv4", 00:17:37.695 "trsvcid": "4420", 00:17:37.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.695 "hdgst": false, 00:17:37.695 "ddgst": false 00:17:37.695 }, 00:17:37.695 "method": "bdev_nvme_attach_controller" 00:17:37.695 }' 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:37.695 "params": { 00:17:37.695 "name": "Nvme1", 00:17:37.695 "trtype": "tcp", 00:17:37.695 "traddr": "10.0.0.2", 00:17:37.695 "adrfam": "ipv4", 00:17:37.695 "trsvcid": "4420", 00:17:37.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.695 "hdgst": false, 00:17:37.695 "ddgst": false 00:17:37.695 }, 00:17:37.695 "method": "bdev_nvme_attach_controller" 00:17:37.695 }' 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:37.695 "params": { 00:17:37.695 "name": "Nvme1", 00:17:37.695 "trtype": "tcp", 00:17:37.695 "traddr": "10.0.0.2", 00:17:37.695 "adrfam": "ipv4", 00:17:37.695 "trsvcid": "4420", 00:17:37.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.695 "hdgst": false, 00:17:37.695 "ddgst": false 00:17:37.695 }, 00:17:37.695 "method": "bdev_nvme_attach_controller" 00:17:37.695 }' 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:37.695 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:37.695 "params": { 00:17:37.695 "name": "Nvme1", 00:17:37.695 "trtype": "tcp", 00:17:37.695 "traddr": "10.0.0.2", 00:17:37.695 "adrfam": "ipv4", 00:17:37.695 "trsvcid": "4420", 00:17:37.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.695 "hdgst": false, 00:17:37.695 "ddgst": false 00:17:37.695 }, 00:17:37.695 "method": "bdev_nvme_attach_controller" 00:17:37.695 }' 00:17:37.695 [2024-07-10 14:19:47.145552] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:17:37.695 [2024-07-10 14:19:47.145555] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:17:37.695 [2024-07-10 14:19:47.145693] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:37.695 [2024-07-10 14:19:47.145701] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:37.695 [2024-07-10 14:19:47.147908] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:17:37.695 [2024-07-10 14:19:47.147943] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
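Each bdevperf instance is handed its configuration through --json /dev/fd/63, a process substitution fed by gen_nvmf_target_json; the printf/jq fragments above are the per-controller pieces of that config. Assembled, the document bdevperf reads looks roughly like the sketch below (the exact envelope is built by gen_nvmf_target_json in test/nvmf/common.sh; all parameter values are the ones printed in the trace):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false }
          }
        ]
      }
    ]
  }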
00:17:37.695 [2024-07-10 14:19:47.148050] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:37.695 [2024-07-10 14:19:47.148059] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:37.953 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.953 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.953 [2024-07-10 14:19:47.384487] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.211 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.211 [2024-07-10 14:19:47.490112] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.211 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.211 [2024-07-10 14:19:47.563257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.211 [2024-07-10 14:19:47.608021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:38.211 [2024-07-10 14:19:47.641935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.469 [2024-07-10 14:19:47.751558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:38.469 [2024-07-10 14:19:47.782082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:38.469 [2024-07-10 14:19:47.860736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:17:38.727 Running I/O for 1 seconds... 00:17:38.985 Running I/O for 1 seconds... 00:17:38.985 Running I/O for 1 seconds... 00:17:38.985 Running I/O for 1 seconds... 
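The four instances run concurrently, one workload each, and are kept from colliding by giving every process its own core mask and its own DPDK shared-memory identity: the -i 1..4 shm ids are what become --file-prefix=spdk1..spdk4 in the EAL lines above, so four independent hugepage/shm domains coexist on one host. A compressed sketch of the same launches (the script issues them as four explicit commands rather than a loop; gen_nvmf_target_json comes from test/nvmf/common.sh):

  masks=(0x10 0x20 0x40 0x80); workloads=(write read flush unmap)
  for i in 0 1 2 3; do
    build/examples/bdevperf -m "${masks[$i]}" -i "$((i + 1))" -s 256 \
        -q 128 -o 4096 -w "${workloads[$i]}" -t 1 --json <(gen_nvmf_target_json) &
  done
  wait    # the trace's 'wait 1366017' etc. collects each PID in turn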
00:17:39.982 00:17:39.982 Latency(us) 00:17:39.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.982 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:39.982 Nvme1n1 : 1.06 4634.19 18.10 0.00 0.00 26268.15 5946.79 64468.01 00:17:39.982 =================================================================================================================== 00:17:39.982 Total : 4634.19 18.10 0.00 0.00 26268.15 5946.79 64468.01 00:17:39.982 00:17:39.982 Latency(us) 00:17:39.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.982 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:39.982 Nvme1n1 : 1.01 7175.25 28.03 0.00 0.00 17734.61 3373.89 26991.12 00:17:39.982 =================================================================================================================== 00:17:39.982 Total : 7175.25 28.03 0.00 0.00 17734.61 3373.89 26991.12 00:17:39.982 00:17:39.982 Latency(us) 00:17:39.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.983 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:39.983 Nvme1n1 : 1.01 4840.23 18.91 0.00 0.00 26339.28 6699.24 52428.80 00:17:39.983 =================================================================================================================== 00:17:39.983 Total : 4840.23 18.91 0.00 0.00 26339.28 6699.24 52428.80 00:17:39.983 00:17:39.983 Latency(us) 00:17:39.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.983 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:39.983 Nvme1n1 : 1.00 127967.57 499.87 0.00 0.00 996.66 348.92 2524.35 00:17:39.983 =================================================================================================================== 00:17:39.983 Total : 127967.57 499.87 0.00 0.00 996.66 348.92 2524.35 00:17:40.917 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1366019 00:17:40.917 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1366021 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1366024 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.182 rmmod nvme_tcp 00:17:41.182 rmmod nvme_fabrics 00:17:41.182 rmmod nvme_keyring 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1365861 ']' 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1365861 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1365861 ']' 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1365861 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1365861 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1365861' 00:17:41.182 killing process with pid 1365861 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1365861 00:17:41.182 14:19:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1365861 00:17:42.560 14:19:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.560 14:19:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.560 14:19:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.560 14:19:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.560 14:19:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.560 14:19:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.560 14:19:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.560 14:19:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.463 14:19:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:44.463 00:17:44.463 real 0m10.264s 00:17:44.463 user 0m31.457s 00:17:44.463 sys 0m4.057s 00:17:44.463 14:19:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:44.463 14:19:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:44.463 ************************************ 00:17:44.463 END TEST nvmf_bdev_io_wait 00:17:44.463 ************************************ 00:17:44.463 14:19:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:44.463 14:19:53 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:44.463 14:19:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:44.463 14:19:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:44.463 14:19:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:44.463 ************************************ 00:17:44.463 START TEST nvmf_queue_depth 00:17:44.463 ************************************ 
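Every target test in this log shuts down through the same nvmftestfini path seen just above: delete the subsystem, sync, unload the kernel NVMe/TCP modules, kill the namespaced nvmf_tgt, then drop the namespace and flush the initiator-side address. A condensed sketch of that teardown (pid and names are from this particular run; the namespace removal is an assumed equivalent of what _remove_spdk_ns does here):

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp            # drags nvme_fabrics / nvme_keyring out with it, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  kill 1365861                       # killprocess: stop the nvmf_tgt running in the namespace
  ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns for this run
  ip -4 addr flush cvl_0_1           # return the initiator port to a clean state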
00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:44.463 * Looking for test storage... 00:17:44.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.463 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:44.464 14:19:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:46.368 
14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:46.368 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:46.368 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:46.368 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:46.368 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:46.368 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:46.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:46.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:17:46.369 00:17:46.369 --- 10.0.0.2 ping statistics --- 00:17:46.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.369 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:17:46.369 00:17:46.369 --- 10.0.0.1 ping statistics --- 00:17:46.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.369 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1368504 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1368504 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1368504 ']' 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.369 14:19:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:46.627 [2024-07-10 14:19:55.882784] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:17:46.627 [2024-07-10 14:19:55.882919] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.627 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.627 [2024-07-10 14:19:56.024466] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.886 [2024-07-10 14:19:56.281906] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.886 [2024-07-10 14:19:56.281994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.886 [2024-07-10 14:19:56.282023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.886 [2024-07-10 14:19:56.282048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.886 [2024-07-10 14:19:56.282071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.886 [2024-07-10 14:19:56.282120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.452 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.452 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:47.452 14:19:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.452 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:47.452 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.452 14:19:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.452 14:19:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:47.452 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.452 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.452 [2024-07-10 14:19:56.860991] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.452 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.452 14:19:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:47.452 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.452 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.710 Malloc0 00:17:47.710 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.710 14:19:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:47.710 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.710 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.710 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.710 14:19:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:47.710 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.710 
14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.710 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.711 14:19:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.711 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.711 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.711 [2024-07-10 14:19:56.978016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.711 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.711 14:19:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1368658 00:17:47.711 14:19:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:47.711 14:19:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:47.711 14:19:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1368658 /var/tmp/bdevperf.sock 00:17:47.711 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1368658 ']' 00:17:47.711 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.711 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.711 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.711 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.711 14:19:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.711 [2024-07-10 14:19:57.059734] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:17:47.711 [2024-07-10 14:19:57.059892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1368658 ] 00:17:47.711 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.711 [2024-07-10 14:19:57.183510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.969 [2024-07-10 14:19:57.418391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.903 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.903 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:48.903 14:19:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:48.903 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.903 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:48.903 NVMe0n1 00:17:48.903 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.903 14:19:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:48.903 Running I/O for 10 seconds... 00:18:01.103 00:18:01.103 Latency(us) 00:18:01.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.103 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:01.103 Verification LBA range: start 0x0 length 0x4000 00:18:01.103 NVMe0n1 : 10.13 6123.11 23.92 0.00 0.00 166121.72 27379.48 107964.49 00:18:01.103 =================================================================================================================== 00:18:01.103 Total : 6123.11 23.92 0.00 0.00 166121.72 27379.48 107964.49 00:18:01.103 0 00:18:01.103 14:20:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1368658 00:18:01.103 14:20:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1368658 ']' 00:18:01.103 14:20:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1368658 00:18:01.103 14:20:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:01.103 14:20:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.103 14:20:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1368658 00:18:01.103 14:20:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:01.103 14:20:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:01.103 14:20:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1368658' 00:18:01.103 killing process with pid 1368658 00:18:01.103 14:20:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1368658 00:18:01.103 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.103 00:18:01.103 Latency(us) 00:18:01.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.103 
=================================================================================================================== 00:18:01.103 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.103 14:20:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1368658 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:01.103 rmmod nvme_tcp 00:18:01.103 rmmod nvme_fabrics 00:18:01.103 rmmod nvme_keyring 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1368504 ']' 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1368504 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1368504 ']' 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1368504 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1368504 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1368504' 00:18:01.103 killing process with pid 1368504 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1368504 00:18:01.103 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1368504 00:18:02.035 14:20:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:02.035 14:20:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:02.035 14:20:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:02.035 14:20:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:02.035 14:20:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:02.035 14:20:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.035 14:20:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.035 14:20:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.935 14:20:13 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:03.935 00:18:03.935 real 0m19.389s 00:18:03.935 user 0m28.008s 00:18:03.935 sys 0m3.040s 00:18:03.935 14:20:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:03.935 14:20:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:03.935 ************************************ 00:18:03.935 END TEST nvmf_queue_depth 00:18:03.935 ************************************ 00:18:03.935 14:20:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:03.935 14:20:13 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:03.935 14:20:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:03.935 14:20:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.935 14:20:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:03.935 ************************************ 00:18:03.935 START TEST nvmf_target_multipath 00:18:03.935 ************************************ 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:03.935 * Looking for test storage... 00:18:03.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.935 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:03.936 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:03.936 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:03.936 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.936 14:20:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.936 14:20:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.936 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:03.936 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:03.936 14:20:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:03.936 14:20:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:05.835 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:05.835 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:05.835 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:05.836 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:05.836 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:05.836 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:06.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:06.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:18:06.094 00:18:06.094 --- 10.0.0.2 ping statistics --- 00:18:06.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.094 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:06.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:06.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:18:06.094 00:18:06.094 --- 10.0.0.1 ping statistics --- 00:18:06.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.094 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:06.094 only one NIC for nvmf test 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.094 rmmod nvme_tcp 00:18:06.094 rmmod nvme_fabrics 00:18:06.094 rmmod nvme_keyring 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.094 14:20:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:08.628 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:08.629 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:08.629 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:08.629 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.629 14:20:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.629 14:20:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.629 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:08.629 00:18:08.629 real 0m4.335s 00:18:08.629 user 0m0.815s 00:18:08.629 sys 0m1.497s 00:18:08.629 14:20:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:08.629 14:20:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:08.629 ************************************ 00:18:08.629 END TEST nvmf_target_multipath 00:18:08.629 ************************************ 00:18:08.629 14:20:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:08.629 14:20:17 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:08.629 14:20:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:08.629 14:20:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.629 14:20:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:08.629 ************************************ 00:18:08.629 START TEST nvmf_zcopy 00:18:08.629 ************************************ 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:08.629 * Looking for test storage... 
00:18:08.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:08.629 14:20:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:10.531 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:10.531 
14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:10.531 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:10.531 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:10.531 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:10.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:18:10.531 00:18:10.531 --- 10.0.0.2 ping statistics --- 00:18:10.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.531 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:18:10.531 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:10.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:10.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:18:10.531 00:18:10.532 --- 10.0.0.1 ping statistics --- 00:18:10.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.532 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1374092 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1374092 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1374092 ']' 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.532 14:20:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:10.532 [2024-07-10 14:20:19.920724] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:18:10.532 [2024-07-10 14:20:19.920877] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.790 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.790 [2024-07-10 14:20:20.078118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.048 [2024-07-10 14:20:20.343060] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.048 [2024-07-10 14:20:20.343133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:11.048 [2024-07-10 14:20:20.343162] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.048 [2024-07-10 14:20:20.343187] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.048 [2024-07-10 14:20:20.343209] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.048 [2024-07-10 14:20:20.343270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.614 [2024-07-10 14:20:20.876315] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.614 [2024-07-10 14:20:20.892524] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.614 malloc0 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.614 
14:20:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:11.614 { 00:18:11.614 "params": { 00:18:11.614 "name": "Nvme$subsystem", 00:18:11.614 "trtype": "$TEST_TRANSPORT", 00:18:11.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.614 "adrfam": "ipv4", 00:18:11.614 "trsvcid": "$NVMF_PORT", 00:18:11.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.614 "hdgst": ${hdgst:-false}, 00:18:11.614 "ddgst": ${ddgst:-false} 00:18:11.614 }, 00:18:11.614 "method": "bdev_nvme_attach_controller" 00:18:11.614 } 00:18:11.614 EOF 00:18:11.614 )") 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:11.614 14:20:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:11.614 "params": { 00:18:11.614 "name": "Nvme1", 00:18:11.614 "trtype": "tcp", 00:18:11.614 "traddr": "10.0.0.2", 00:18:11.614 "adrfam": "ipv4", 00:18:11.614 "trsvcid": "4420", 00:18:11.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.614 "hdgst": false, 00:18:11.614 "ddgst": false 00:18:11.614 }, 00:18:11.614 "method": "bdev_nvme_attach_controller" 00:18:11.614 }' 00:18:11.614 [2024-07-10 14:20:21.057013] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:18:11.614 [2024-07-10 14:20:21.057164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1374245 ] 00:18:11.873 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.873 [2024-07-10 14:20:21.185421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.131 [2024-07-10 14:20:21.450883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.698 Running I/O for 10 seconds... 
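
For reference, the environment that nvmf/common.sh and target/zcopy.sh have assembled up to this point can be condensed into the sketch below. It assumes the rpc_cmd helper seen in the trace is equivalent to calling scripts/rpc.py against the default /var/tmp/spdk.sock socket, and that cvl_0_0/cvl_0_1 are simply the NIC ports this particular test node exposes; every command and flag is otherwise taken from the trace itself.

    # Move the target-side port into its own network namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the NVMe-oF target inside the namespace (from the SPDK repository root),
    # then configure it over RPC; waitforlisten in the trace just polls the RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
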
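The bdevperf invocation at zcopy.sh@33 reads its bdev configuration as JSON from /dev/fd/62; the fragment printed by gen_nvmf_target_json above is the bdev_nvme_attach_controller entry pointing at the subsystem just created. A standalone equivalent, assuming the usual "subsystems"/"bdev" wrapper that gen_nvmf_target_json builds around that fragment and a hypothetical file path in place of the process-substitution fd, would look roughly like:

    # /tmp/nvmf_bdev.json is a hypothetical path; the test feeds the same JSON via /dev/fd/62
    cat > /tmp/nvmf_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # 10-second verify pass, queue depth 128, 8 KiB I/O, matching the run above
    ./build/examples/bdevperf --json /tmp/nvmf_bdev.json -t 10 -q 128 -w verify -o 8192
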
00:18:22.677 00:18:22.677 Latency(us) 00:18:22.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.677 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:22.677 Verification LBA range: start 0x0 length 0x1000 00:18:22.677 Nvme1n1 : 10.02 4321.38 33.76 0.00 0.00 29538.16 940.56 39612.87 00:18:22.677 =================================================================================================================== 00:18:22.677 Total : 4321.38 33.76 0.00 0.00 29538.16 940.56 39612.87 00:18:23.612 14:20:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1375569 00:18:23.612 14:20:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:23.612 14:20:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:23.612 14:20:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:23.612 14:20:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:23.612 14:20:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:23.612 14:20:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:23.612 14:20:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.612 14:20:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.612 { 00:18:23.612 "params": { 00:18:23.612 "name": "Nvme$subsystem", 00:18:23.612 "trtype": "$TEST_TRANSPORT", 00:18:23.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.612 "adrfam": "ipv4", 00:18:23.612 "trsvcid": "$NVMF_PORT", 00:18:23.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.612 "hdgst": ${hdgst:-false}, 00:18:23.612 "ddgst": ${ddgst:-false} 00:18:23.612 }, 00:18:23.612 "method": "bdev_nvme_attach_controller" 00:18:23.612 } 00:18:23.612 EOF 00:18:23.612 )") 00:18:23.612 14:20:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:23.612 [2024-07-10 14:20:33.023105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.612 [2024-07-10 14:20:33.023173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.612 14:20:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:18:23.612 14:20:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:23.612 14:20:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:23.612 "params": { 00:18:23.612 "name": "Nvme1", 00:18:23.612 "trtype": "tcp", 00:18:23.612 "traddr": "10.0.0.2", 00:18:23.612 "adrfam": "ipv4", 00:18:23.612 "trsvcid": "4420", 00:18:23.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:23.612 "hdgst": false, 00:18:23.612 "ddgst": false 00:18:23.612 }, 00:18:23.612 "method": "bdev_nvme_attach_controller" 00:18:23.612 }' 00:18:23.612 [2024-07-10 14:20:33.031013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.613 [2024-07-10 14:20:33.031057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.613 [2024-07-10 14:20:33.039043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.613 [2024-07-10 14:20:33.039077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.613 [2024-07-10 14:20:33.047067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.613 [2024-07-10 14:20:33.047103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.613 [2024-07-10 14:20:33.055073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.613 [2024-07-10 14:20:33.055110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.613 [2024-07-10 14:20:33.063144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.613 [2024-07-10 14:20:33.063182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.613 [2024-07-10 14:20:33.071129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.613 [2024-07-10 14:20:33.071162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.613 [2024-07-10 14:20:33.079132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.613 [2024-07-10 14:20:33.079164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.613 [2024-07-10 14:20:33.087190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.613 [2024-07-10 14:20:33.087221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.095176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.095211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.103213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.103246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.104839] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:18:23.871 [2024-07-10 14:20:33.104967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375569 ] 00:18:23.871 [2024-07-10 14:20:33.111213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.111241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.119228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.119257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.127261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.127289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.135295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.135327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.143291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.143319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.151329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.151356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.159331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.159358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.167367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.167394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.175407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.175445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.871 [2024-07-10 14:20:33.183451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.183480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.191468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.191499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.199496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.199535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.207494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.207523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 
14:20:33.215533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.215561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.223551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.223580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.231578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.231607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.239591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.239620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.242068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.871 [2024-07-10 14:20:33.247625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.247655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.255784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.255833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.263701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.263746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.271690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.271734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.279759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.279812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.287748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.287793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.295800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.295827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.303826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.303853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.311831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.311858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.319868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.319896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.327877] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.327904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.335880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.335906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.343915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.343941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.871 [2024-07-10 14:20:33.351924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.871 [2024-07-10 14:20:33.351953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.359963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.359992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.367982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.368010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.376030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.376062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.384114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.384178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.392064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.392096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.400053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.400080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.408092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.408120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.416098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.416124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.424135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.424162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.432156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.432183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.440160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.440186] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.448201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.448229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.456223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.456250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.464226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.464253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.472284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.472311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.480282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.480309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.488311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.488338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.496330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.496357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.504240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.130 [2024-07-10 14:20:33.504358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.504386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.512387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.512436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.520511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.520555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.528510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.528556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.536491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.536523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.544495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.544525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.552518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.552547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.560550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.560579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.568569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.568597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.576567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.576595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.584590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.584619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.592677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.592740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.600753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.600816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.130 [2024-07-10 14:20:33.608748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.130 [2024-07-10 14:20:33.608815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.620845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.620913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.628742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.628771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.636739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.636767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.644770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.644798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.652790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.652816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.660815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.660842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.668867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.668894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.676837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:18:24.388 [2024-07-10 14:20:33.676863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.684906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.684933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.692932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.692959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.700914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.700941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.708954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.708981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.716974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.717009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.724983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.725009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.733019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.733045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.741067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.741104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.749161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.749213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.757158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.757201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.765161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.765196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.773183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.773216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.781186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.781219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 14:20:33.789202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.789235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.388 [2024-07-10 
14:20:33.797253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.388 [2024-07-10 14:20:33.797287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.389 [2024-07-10 14:20:33.805231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.389 [2024-07-10 14:20:33.805263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.389 [2024-07-10 14:20:33.813270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.389 [2024-07-10 14:20:33.813302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.389 [2024-07-10 14:20:33.821296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.389 [2024-07-10 14:20:33.821329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.389 [2024-07-10 14:20:33.829302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.389 [2024-07-10 14:20:33.829334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.389 [2024-07-10 14:20:33.837338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.389 [2024-07-10 14:20:33.837370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.389 [2024-07-10 14:20:33.845362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.389 [2024-07-10 14:20:33.845394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.389 [2024-07-10 14:20:33.853370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.389 [2024-07-10 14:20:33.853402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.389 [2024-07-10 14:20:33.861440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.389 [2024-07-10 14:20:33.861487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.389 [2024-07-10 14:20:33.869419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.389 [2024-07-10 14:20:33.869469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.877484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.877515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.885510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.885543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.893513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.893559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.901554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.901584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.909587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.909618] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.917589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.917618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.925623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.925652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.933604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.933647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.941646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.941689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.949677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.949724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.957728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.957765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.965759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.965795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.973766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.973815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.981873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.981913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:33.989898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.989933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 Running I/O for 5 seconds... 
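
The second bdevperf pass (zcopy.sh@37, pid 1375569) reuses the same attach-controller configuration but drives a 50/50 random read/write workload for 5 seconds (-w randrw -M 50 -t 5 -q 128 -o 8192). The interleaved subsystem.c/nvmf_rpc.c error pairs filling the trace before and during this run appear to be the test deliberately re-issuing nvmf_subsystem_add_ns for a namespace ID that is already attached while I/O is in flight; each attempt is rejected exactly as a manual call would be:

    # NSID 1 is already attached to cnode1, so the RPC is refused with
    # "Requested NSID 1 already in use" / "Unable to add namespace"
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
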
00:18:24.647 [2024-07-10 14:20:33.997881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:33.997926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:34.015045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:34.015087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:34.030276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:34.030316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:34.045367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:34.045406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:34.060321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:34.060361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:34.075435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:34.075493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:34.091073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:34.091114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:34.106050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:34.106090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.647 [2024-07-10 14:20:34.120701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.647 [2024-07-10 14:20:34.120756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.136746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.136786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.152379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.152418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.167314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.167354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.182360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.182417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.197179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.197218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.212657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 
[2024-07-10 14:20:34.212691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.227803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.227842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.242334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.242373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.257482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.257517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.272439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.272491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.288065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.288105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.303990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.304031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.318449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.318504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.332925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.332966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.347873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.347912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.362609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.362646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.906 [2024-07-10 14:20:34.378025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.906 [2024-07-10 14:20:34.378065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.396281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.396334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.415229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.415270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.431008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.431056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.446722] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.446761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.462489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.462524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.477710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.477767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.493058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.493096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.507801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.507841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.522581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.522616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.537389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.537435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.552145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.552184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.566654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.566689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.582434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.582488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.597051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.597090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.611945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.611984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.626814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.626854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.642031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.642070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.193 [2024-07-10 14:20:34.659251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.193 [2024-07-10 14:20:34.659294] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.491 [2024-07-10 14:20:34.680631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.680678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.695681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.695736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.710647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.710683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.725961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.726010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.740875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.740915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.755945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.755985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.770661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.770702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.786542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.786594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.802095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.802134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.817050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.817089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.832254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.832293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.846974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.847013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.861654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.861689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.875992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.876032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.890921] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.890959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.905908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.905947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.920666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.920718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.935176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.935214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.949719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.949758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.492 [2024-07-10 14:20:34.964746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.492 [2024-07-10 14:20:34.964800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.750 [2024-07-10 14:20:34.980017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:34.980057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:34.996073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:34.996112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.011041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.011088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.026289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.026328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.040880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.040919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.056546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.056582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.071419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.071485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.086174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.086212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.101838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.101877] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.116853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.116892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.131735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.131774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.146362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.146401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.161547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.161582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.177061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.177101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.192385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.192441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.207483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.207519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.751 [2024-07-10 14:20:35.222309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.751 [2024-07-10 14:20:35.222347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.237187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.237226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.252319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.252358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.267222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.267261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.282055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.282095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.297152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.297200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.312493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.312529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.327482] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.327517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.342356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.342395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.357669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.357720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.372528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.372563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.387349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.387388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.402330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.402370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.417121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.417160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.432832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.432871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.448870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.448910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.463817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.463857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.009 [2024-07-10 14:20:35.479270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.009 [2024-07-10 14:20:35.479310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.494596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.494633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.509632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.509667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.525089] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.525130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.540628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.540663] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.555911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.555950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.568262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.568302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.582220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.582260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.597648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.597684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.612353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.612393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.627842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.627882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.642616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.642652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.658099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.658138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.673060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.673098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.688597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.688632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.703453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.703505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.718619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.718655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.731831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.731872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.268 [2024-07-10 14:20:35.747022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.268 [2024-07-10 14:20:35.747061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.526 [2024-07-10 14:20:35.762340] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.526 [2024-07-10 14:20:35.762379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.526 [2024-07-10 14:20:35.777726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.526 [2024-07-10 14:20:35.777765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.526 [2024-07-10 14:20:35.793050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.526 [2024-07-10 14:20:35.793090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.526 [2024-07-10 14:20:35.808267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.526 [2024-07-10 14:20:35.808306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.526 [2024-07-10 14:20:35.823140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.526 [2024-07-10 14:20:35.823179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.526 [2024-07-10 14:20:35.838571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.526 [2024-07-10 14:20:35.838607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.526 [2024-07-10 14:20:35.851310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.526 [2024-07-10 14:20:35.851349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.526 [2024-07-10 14:20:35.866089] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.526 [2024-07-10 14:20:35.866128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.526 [2024-07-10 14:20:35.880543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.526 [2024-07-10 14:20:35.880578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.526 [2024-07-10 14:20:35.895652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.526 [2024-07-10 14:20:35.895688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.526 [2024-07-10 14:20:35.910262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.526 [2024-07-10 14:20:35.910301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.526 [2024-07-10 14:20:35.925442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.526 [2024-07-10 14:20:35.925480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.526 [2024-07-10 14:20:35.940369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.526 [2024-07-10 14:20:35.940407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.527 [2024-07-10 14:20:35.955726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.527 [2024-07-10 14:20:35.955766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.527 [2024-07-10 14:20:35.971060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.527 [2024-07-10 14:20:35.971099] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.527 [2024-07-10 14:20:35.985685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.527 [2024-07-10 14:20:35.985739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.527 [2024-07-10 14:20:36.001031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.527 [2024-07-10 14:20:36.001070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.785 [2024-07-10 14:20:36.016388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.785 [2024-07-10 14:20:36.016435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.785 [2024-07-10 14:20:36.031450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.785 [2024-07-10 14:20:36.031503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.785 [2024-07-10 14:20:36.047276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.785 [2024-07-10 14:20:36.047316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.785 [2024-07-10 14:20:36.062651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.785 [2024-07-10 14:20:36.062686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.785 [2024-07-10 14:20:36.078157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.785 [2024-07-10 14:20:36.078197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.785 [2024-07-10 14:20:36.093320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.785 [2024-07-10 14:20:36.093359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.785 [2024-07-10 14:20:36.108368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.785 [2024-07-10 14:20:36.108408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.785 [2024-07-10 14:20:36.122757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.785 [2024-07-10 14:20:36.122796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.785 [2024-07-10 14:20:36.137262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.785 [2024-07-10 14:20:36.137301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.786 [2024-07-10 14:20:36.152289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.786 [2024-07-10 14:20:36.152328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.786 [2024-07-10 14:20:36.167000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.786 [2024-07-10 14:20:36.167039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.786 [2024-07-10 14:20:36.181957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.786 [2024-07-10 14:20:36.181996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.786 [2024-07-10 14:20:36.197317] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.786 [2024-07-10 14:20:36.197355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.786 [2024-07-10 14:20:36.212235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.786 [2024-07-10 14:20:36.212273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.786 [2024-07-10 14:20:36.227039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.786 [2024-07-10 14:20:36.227078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.786 [2024-07-10 14:20:36.241901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.786 [2024-07-10 14:20:36.241940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.786 [2024-07-10 14:20:36.256054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.786 [2024-07-10 14:20:36.256093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.044 [2024-07-10 14:20:36.272534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.044 [2024-07-10 14:20:36.272569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.044 [2024-07-10 14:20:36.287132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.044 [2024-07-10 14:20:36.287170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.302228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.302266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.317671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.317725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.332958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.332998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.348249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.348289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.363351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.363390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.378417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.378464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.393567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.393603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.409049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.409089] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.423671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.423725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.438978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.439018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.454632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.454667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.469905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.469944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.485097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.485136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.500592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.500628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.045 [2024-07-10 14:20:36.515597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.045 [2024-07-10 14:20:36.515632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.531125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.531165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.545902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.545941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.560776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.560815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.575930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.575969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.590370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.590420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.606134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.606173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.622112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.622151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.637171] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.637210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.652005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.652045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.667302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.667341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.682966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.683005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.697700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.697754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.712194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.712241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.727574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.727610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.742643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.742679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.757847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.757886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.303 [2024-07-10 14:20:36.772134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.303 [2024-07-10 14:20:36.772174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.786403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.786466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.801957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.801996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.817017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.817056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.831596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.831631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.846445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.846497] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.861128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.861167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.875339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.875378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.889584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.889619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.904643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.904678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.919636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.919673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.934769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.934808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.949913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.949952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.964914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.964953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.980408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.980456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:36.995408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:36.995464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:37.010383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:37.010422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:37.024964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:37.025004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.562 [2024-07-10 14:20:37.040786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.562 [2024-07-10 14:20:37.040826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.820 [2024-07-10 14:20:37.056784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.820 [2024-07-10 14:20:37.056824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.820 [2024-07-10 14:20:37.071758] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.820 [2024-07-10 14:20:37.071797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.820 [2024-07-10 14:20:37.086649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.820 [2024-07-10 14:20:37.086684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.820 [2024-07-10 14:20:37.100944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.820 [2024-07-10 14:20:37.100983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.820 [2024-07-10 14:20:37.115960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.820 [2024-07-10 14:20:37.115999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.820 [2024-07-10 14:20:37.130861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.820 [2024-07-10 14:20:37.130901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.821 [2024-07-10 14:20:37.145815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.821 [2024-07-10 14:20:37.145855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.821 [2024-07-10 14:20:37.160748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.821 [2024-07-10 14:20:37.160787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.821 [2024-07-10 14:20:37.175526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.821 [2024-07-10 14:20:37.175562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.821 [2024-07-10 14:20:37.190407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.821 [2024-07-10 14:20:37.190455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.821 [2024-07-10 14:20:37.204775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.821 [2024-07-10 14:20:37.204813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.821 [2024-07-10 14:20:37.220524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.821 [2024-07-10 14:20:37.220575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.821 [2024-07-10 14:20:37.235238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.821 [2024-07-10 14:20:37.235276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.821 [2024-07-10 14:20:37.249771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.821 [2024-07-10 14:20:37.249810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.821 [2024-07-10 14:20:37.264223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.821 [2024-07-10 14:20:37.264261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.821 [2024-07-10 14:20:37.278979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.821 [2024-07-10 14:20:37.279025] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.821 [2024-07-10 14:20:37.294140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.821 [2024-07-10 14:20:37.294179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.309410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.309473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.324602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.324637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.339574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.339610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.355232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.355271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.370609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.370644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.385515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.385551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.397548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.397585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.410926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.410965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.427042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.427080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.441902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.441941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.456583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.456618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.471227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.471266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.486673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.486728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.501204] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.501244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.516021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.516060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.530737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.530776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.079 [2024-07-10 14:20:37.546281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.079 [2024-07-10 14:20:37.546320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.337 [2024-07-10 14:20:37.561531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.561576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.576785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.576825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.591332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.591371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.605560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.605595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.620145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.620184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.635114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.635153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.650407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.650459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.665005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.665044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.679524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.679559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.694190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.694230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.709114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.709153] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.724091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.724130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.739303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.739342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.754653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.754690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.770233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.770273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.786110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.786150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.801182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.801222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.338 [2024-07-10 14:20:37.816926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.338 [2024-07-10 14:20:37.816966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:37.832913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:37.832953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:37.848209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:37.848248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:37.863701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:37.863759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:37.879143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:37.879182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:37.894964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:37.895003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:37.910877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:37.910916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:37.925939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:37.925979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:37.941009] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:37.941048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:37.955791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:37.955829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:37.971016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:37.971055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:37.985785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:37.985825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:38.000957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:38.000996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:38.015985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:38.016023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:38.031317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:38.031357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:38.046687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:38.046745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:38.061337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:38.061376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.596 [2024-07-10 14:20:38.076333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.596 [2024-07-10 14:20:38.076372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.091599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.091634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.106659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.106694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.121338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.121377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.136598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.136633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.152284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.152324] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.168081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.168120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.183174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.183213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.198326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.198365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.214023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.214061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.229102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.229142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.244826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.244866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.259177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.259216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.274272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.274311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.288684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.288738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.304305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.304345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.318623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.318659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.855 [2024-07-10 14:20:38.333447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.855 [2024-07-10 14:20:38.333498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.114 [2024-07-10 14:20:38.348687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.114 [2024-07-10 14:20:38.348743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.114 [2024-07-10 14:20:38.363531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.114 [2024-07-10 14:20:38.363566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.114 [2024-07-10 14:20:38.378729] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.114 [2024-07-10 14:20:38.378768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the pair of messages above repeats, timestamps advancing roughly every 15 ms, from 14:20:38.378 through 14:20:39.024 while the test keeps retrying nvmf_subsystem_add_ns against the paused subsystem)
00:18:29.632 Latency(us)
00:18:29.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:29.632 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:29.632 Nvme1n1 : 5.01 8400.76 65.63 0.00 0.00 15209.84 4271.98 28738.75
00:18:29.632 ===================================================================================================================
00:18:29.632 Total : 8400.76 65.63 0.00 0.00 15209.84 4271.98 28738.75
(the same error pair resumes immediately after the latency summary and repeats roughly every 8 ms through 14:20:40.172)
00:18:30.928 [2024-07-10 14:20:40.172516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.928 [2024-07-10 14:20:40.172547]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.928 [2024-07-10 14:20:40.180513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.928 [2024-07-10 14:20:40.180542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1375569) - No such process 00:18:30.928 14:20:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1375569 00:18:30.928 14:20:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:30.928 14:20:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.928 14:20:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:30.928 14:20:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.928 14:20:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:30.928 14:20:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.928 14:20:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:30.928 delay0 00:18:30.928 14:20:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.928 14:20:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:30.928 14:20:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.928 14:20:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:30.928 14:20:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.928 14:20:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:30.928 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.928 [2024-07-10 14:20:40.345645] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:39.032 Initializing NVMe Controllers 00:18:39.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:39.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:39.032 Initialization complete. Launching workers. 
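For reference, the namespace swap and abort exercise the script just drove over RPC, and whose completion statistics follow below, can be reproduced by hand against a running target. This is a minimal sketch, assuming a configured nqn.2016-06.io.spdk:cnode1 subsystem backed by a malloc0 bdev and rpc.py from the SPDK scripts/ directory talking to the default RPC socket (the rpc_cmd wrapper in the trace issues the same calls):

    # replace the original namespace with one backed by a heavily delayed bdev (values taken from the trace)
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # queue I/O against the delayed namespace for 5 s and abort the outstanding commands
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The delay bdev keeps commands pending long enough for the abort example to find them in flight; its per-namespace and per-controller counts are the NS:/CTRLR: lines that follow.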
00:18:39.032 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 261, failed: 11522 00:18:39.032 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 11694, failed to submit 89 00:18:39.032 success 11588, unsuccess 106, failed 0 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:39.032 rmmod nvme_tcp 00:18:39.032 rmmod nvme_fabrics 00:18:39.032 rmmod nvme_keyring 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1374092 ']' 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1374092 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1374092 ']' 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1374092 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1374092 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1374092' 00:18:39.032 killing process with pid 1374092 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1374092 00:18:39.032 14:20:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1374092 00:18:39.965 14:20:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:39.965 14:20:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:39.965 14:20:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:39.965 14:20:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:39.965 14:20:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:39.965 14:20:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.965 14:20:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.965 14:20:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.866 14:20:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:41.866 00:18:41.866 real 0m33.501s 00:18:41.866 user 0m50.367s 00:18:41.866 sys 0m9.226s 00:18:41.866 14:20:51 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:18:41.866 14:20:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:41.866 ************************************ 00:18:41.866 END TEST nvmf_zcopy 00:18:41.866 ************************************ 00:18:41.866 14:20:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:41.866 14:20:51 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:41.866 14:20:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:41.866 14:20:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:41.866 14:20:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:41.866 ************************************ 00:18:41.866 START TEST nvmf_nmic 00:18:41.866 ************************************ 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:41.866 * Looking for test storage... 00:18:41.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:41.866 14:20:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:43.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:43.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:43.767 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:43.768 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:43.768 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:43.768 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:44.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:18:44.025 00:18:44.025 --- 10.0.0.2 ping statistics --- 00:18:44.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.025 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:44.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:18:44.025 00:18:44.025 --- 10.0.0.1 ping statistics --- 00:18:44.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.025 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1379342 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1379342 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1379342 ']' 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.025 14:20:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:44.025 [2024-07-10 14:20:53.426989] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:18:44.025 [2024-07-10 14:20:53.427127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.025 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.283 [2024-07-10 14:20:53.560352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.541 [2024-07-10 14:20:53.819245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.541 [2024-07-10 14:20:53.819316] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:44.541 [2024-07-10 14:20:53.819345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.541 [2024-07-10 14:20:53.819366] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.541 [2024-07-10 14:20:53.819389] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.541 [2024-07-10 14:20:53.819508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.541 [2024-07-10 14:20:53.819577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.541 [2024-07-10 14:20:53.819622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.541 [2024-07-10 14:20:53.819633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.105 [2024-07-10 14:20:54.386819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.105 Malloc0 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.105 [2024-07-10 14:20:54.492273] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:45.105 test case1: single bdev can't be used in multiple subsystems 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.105 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.106 [2024-07-10 14:20:54.516093] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:45.106 [2024-07-10 14:20:54.516152] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:45.106 [2024-07-10 14:20:54.516178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.106 request: 00:18:45.106 { 00:18:45.106 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:45.106 "namespace": { 00:18:45.106 "bdev_name": "Malloc0", 00:18:45.106 "no_auto_visible": false 00:18:45.106 }, 00:18:45.106 "method": "nvmf_subsystem_add_ns", 00:18:45.106 "req_id": 1 00:18:45.106 } 00:18:45.106 Got JSON-RPC error response 00:18:45.106 response: 00:18:45.106 { 00:18:45.106 "code": -32602, 00:18:45.106 "message": "Invalid parameters" 00:18:45.106 } 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:45.106 Adding namespace failed - expected result. 
00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:45.106 test case2: host connect to nvmf target in multiple paths 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.106 [2024-07-10 14:20:54.524210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.106 14:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:46.039 14:20:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:46.605 14:20:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:46.605 14:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.605 14:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:46.605 14:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:46.606 14:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:48.503 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:48.503 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:48.503 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:48.503 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:48.503 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:48.503 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:48.503 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:48.503 [global] 00:18:48.503 thread=1 00:18:48.503 invalidate=1 00:18:48.503 rw=write 00:18:48.503 time_based=1 00:18:48.503 runtime=1 00:18:48.503 ioengine=libaio 00:18:48.503 direct=1 00:18:48.503 bs=4096 00:18:48.503 iodepth=1 00:18:48.503 norandommap=0 00:18:48.503 numjobs=1 00:18:48.503 00:18:48.503 verify_dump=1 00:18:48.503 verify_backlog=512 00:18:48.503 verify_state_save=0 00:18:48.503 do_verify=1 00:18:48.503 verify=crc32c-intel 00:18:48.503 [job0] 00:18:48.503 filename=/dev/nvme0n1 00:18:48.503 Could not set queue depth (nvme0n1) 00:18:48.760 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.760 fio-3.35 00:18:48.760 Starting 1 thread 00:18:50.161 00:18:50.161 job0: (groupid=0, jobs=1): err= 0: pid=1379984: Wed Jul 10 14:20:59 2024 00:18:50.161 read: IOPS=20, BW=81.8KiB/s (83.8kB/s)(84.0KiB/1027msec) 00:18:50.161 slat (nsec): min=9686, max=34875, avg=18734.48, stdev=9591.68 
00:18:50.161 clat (usec): min=40917, max=41049, avg=40979.28, stdev=33.14 00:18:50.161 lat (usec): min=40952, max=41059, avg=40998.01, stdev=28.72 00:18:50.161 clat percentiles (usec): 00:18:50.161 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:50.161 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:50.161 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:50.161 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:50.161 | 99.99th=[41157] 00:18:50.161 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:18:50.161 slat (usec): min=8, max=34391, avg=79.38, stdev=1519.39 00:18:50.161 clat (usec): min=214, max=381, avg=241.63, stdev=18.97 00:18:50.161 lat (usec): min=224, max=34773, avg=321.01, stdev=1525.69 00:18:50.161 clat percentiles (usec): 00:18:50.161 | 1.00th=[ 219], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 229], 00:18:50.161 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:18:50.161 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 281], 00:18:50.161 | 99.00th=[ 306], 99.50th=[ 338], 99.90th=[ 383], 99.95th=[ 383], 00:18:50.161 | 99.99th=[ 383] 00:18:50.161 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:50.161 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:50.161 lat (usec) : 250=75.42%, 500=20.64% 00:18:50.161 lat (msec) : 50=3.94% 00:18:50.161 cpu : usr=0.39%, sys=0.78%, ctx=536, majf=0, minf=2 00:18:50.161 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.161 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.161 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.161 00:18:50.161 Run status group 0 (all jobs): 00:18:50.161 READ: bw=81.8KiB/s (83.8kB/s), 81.8KiB/s-81.8KiB/s (83.8kB/s-83.8kB/s), io=84.0KiB (86.0kB), run=1027-1027msec 00:18:50.161 WRITE: bw=1994KiB/s (2042kB/s), 1994KiB/s-1994KiB/s (2042kB/s-2042kB/s), io=2048KiB (2097kB), run=1027-1027msec 00:18:50.161 00:18:50.161 Disk stats (read/write): 00:18:50.161 nvme0n1: ios=43/512, merge=0/0, ticks=1688/116, in_queue=1804, util=98.80% 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:50.161 rmmod nvme_tcp 00:18:50.161 rmmod nvme_fabrics 00:18:50.161 rmmod nvme_keyring 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1379342 ']' 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1379342 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1379342 ']' 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1379342 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1379342 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1379342' 00:18:50.161 killing process with pid 1379342 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1379342 00:18:50.161 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1379342 00:18:51.534 14:21:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:51.534 14:21:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:51.534 14:21:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:51.534 14:21:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.534 14:21:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:51.534 14:21:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.534 14:21:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.534 14:21:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.063 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:54.063 00:18:54.063 real 0m11.857s 00:18:54.063 user 0m27.967s 00:18:54.063 sys 0m2.522s 00:18:54.063 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:54.063 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:54.063 ************************************ 00:18:54.063 END TEST nvmf_nmic 00:18:54.063 ************************************ 00:18:54.063 14:21:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:54.063 14:21:03 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:54.063 14:21:03 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:54.063 14:21:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:54.063 14:21:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:54.063 ************************************ 00:18:54.063 START TEST nvmf_fio_target 00:18:54.063 ************************************ 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:54.063 * Looking for test storage... 00:18:54.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.063 14:21:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.064 14:21:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.064 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:54.064 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:54.064 14:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:54.064 14:21:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:55.960 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.961 14:21:05 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:55.961 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:55.961 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.961 14:21:05 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:55.961 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:55.961 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:55.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:18:55.961 00:18:55.961 --- 10.0.0.2 ping statistics --- 00:18:55.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.961 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:55.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:18:55.961 00:18:55.961 --- 10.0.0.1 ping statistics --- 00:18:55.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.961 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1382422 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1382422 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1382422 ']' 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.961 14:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.961 [2024-07-10 14:21:05.406924] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:18:55.961 [2024-07-10 14:21:05.407066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.220 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.220 [2024-07-10 14:21:05.549794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.477 [2024-07-10 14:21:05.826693] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.477 [2024-07-10 14:21:05.826772] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.477 [2024-07-10 14:21:05.826801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.478 [2024-07-10 14:21:05.826821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.478 [2024-07-10 14:21:05.826846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.478 [2024-07-10 14:21:05.826985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.478 [2024-07-10 14:21:05.827035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.478 [2024-07-10 14:21:05.827083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.478 [2024-07-10 14:21:05.827092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:57.042 14:21:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.043 14:21:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:18:57.043 14:21:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:57.043 14:21:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:57.043 14:21:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.043 14:21:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.043 14:21:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:57.300 [2024-07-10 14:21:06.634935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.300 14:21:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.558 14:21:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:57.558 14:21:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:58.123 14:21:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:58.123 14:21:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:58.380 14:21:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:18:58.380 14:21:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:58.638 14:21:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:58.638 14:21:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:58.932 14:21:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.212 14:21:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:59.212 14:21:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.470 14:21:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:59.470 14:21:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.727 14:21:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:59.727 14:21:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:59.985 14:21:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:00.243 14:21:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:00.243 14:21:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:00.501 14:21:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:00.501 14:21:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:00.758 14:21:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.016 [2024-07-10 14:21:10.425262] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.016 14:21:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:01.274 14:21:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:01.532 14:21:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:02.097 14:21:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:02.097 14:21:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:19:02.097 14:21:11 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.097 14:21:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:19:02.097 14:21:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:19:02.097 14:21:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:04.624 14:21:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:04.624 14:21:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:04.624 14:21:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.624 14:21:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:04.624 14:21:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.624 14:21:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:04.624 14:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:04.624 [global] 00:19:04.624 thread=1 00:19:04.624 invalidate=1 00:19:04.624 rw=write 00:19:04.624 time_based=1 00:19:04.624 runtime=1 00:19:04.624 ioengine=libaio 00:19:04.624 direct=1 00:19:04.624 bs=4096 00:19:04.624 iodepth=1 00:19:04.624 norandommap=0 00:19:04.624 numjobs=1 00:19:04.624 00:19:04.624 verify_dump=1 00:19:04.624 verify_backlog=512 00:19:04.624 verify_state_save=0 00:19:04.624 do_verify=1 00:19:04.624 verify=crc32c-intel 00:19:04.624 [job0] 00:19:04.624 filename=/dev/nvme0n1 00:19:04.624 [job1] 00:19:04.624 filename=/dev/nvme0n2 00:19:04.624 [job2] 00:19:04.624 filename=/dev/nvme0n3 00:19:04.624 [job3] 00:19:04.624 filename=/dev/nvme0n4 00:19:04.624 Could not set queue depth (nvme0n1) 00:19:04.624 Could not set queue depth (nvme0n2) 00:19:04.624 Could not set queue depth (nvme0n3) 00:19:04.624 Could not set queue depth (nvme0n4) 00:19:04.624 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.624 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.624 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.624 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.624 fio-3.35 00:19:04.624 Starting 4 threads 00:19:05.557 00:19:05.557 job0: (groupid=0, jobs=1): err= 0: pid=1384014: Wed Jul 10 14:21:15 2024 00:19:05.557 read: IOPS=394, BW=1577KiB/s (1615kB/s)(1620KiB/1027msec) 00:19:05.557 slat (nsec): min=6206, max=68189, avg=24758.48, stdev=9715.09 00:19:05.557 clat (usec): min=376, max=41992, avg=2079.95, stdev=7900.81 00:19:05.557 lat (usec): min=391, max=42003, avg=2104.71, stdev=7899.45 00:19:05.557 clat percentiles (usec): 00:19:05.557 | 1.00th=[ 392], 5.00th=[ 408], 10.00th=[ 429], 20.00th=[ 445], 00:19:05.557 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 474], 60.00th=[ 486], 00:19:05.557 | 70.00th=[ 515], 80.00th=[ 537], 90.00th=[ 562], 95.00th=[ 594], 00:19:05.557 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:05.557 | 99.99th=[42206] 00:19:05.557 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:19:05.557 slat (nsec): min=6954, max=55257, avg=16231.72, stdev=8111.94 00:19:05.557 clat 
(usec): min=261, max=504, avg=313.80, stdev=39.04 00:19:05.557 lat (usec): min=273, max=525, avg=330.04, stdev=41.45 00:19:05.557 clat percentiles (usec): 00:19:05.557 | 1.00th=[ 269], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285], 00:19:05.557 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 318], 00:19:05.557 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 367], 95.00th=[ 404], 00:19:05.557 | 99.00th=[ 449], 99.50th=[ 482], 99.90th=[ 506], 99.95th=[ 506], 00:19:05.557 | 99.99th=[ 506] 00:19:05.557 bw ( KiB/s): min= 4096, max= 4096, per=25.23%, avg=4096.00, stdev= 0.00, samples=1 00:19:05.557 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:05.557 lat (usec) : 500=84.84%, 750=13.41% 00:19:05.557 lat (msec) : 50=1.74% 00:19:05.557 cpu : usr=1.07%, sys=1.66%, ctx=917, majf=0, minf=1 00:19:05.557 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.558 issued rwts: total=405,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.558 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.558 job1: (groupid=0, jobs=1): err= 0: pid=1384015: Wed Jul 10 14:21:15 2024 00:19:05.558 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:05.558 slat (nsec): min=6520, max=75428, avg=24157.08, stdev=9757.69 00:19:05.558 clat (usec): min=398, max=41457, avg=603.55, stdev=2191.34 00:19:05.558 lat (usec): min=419, max=41489, avg=627.71, stdev=2191.52 00:19:05.558 clat percentiles (usec): 00:19:05.558 | 1.00th=[ 412], 5.00th=[ 437], 10.00th=[ 445], 20.00th=[ 457], 00:19:05.558 | 30.00th=[ 461], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 478], 00:19:05.558 | 70.00th=[ 486], 80.00th=[ 506], 90.00th=[ 562], 95.00th=[ 594], 00:19:05.558 | 99.00th=[ 652], 99.50th=[ 758], 99.90th=[41157], 99.95th=[41681], 00:19:05.558 | 99.99th=[41681] 00:19:05.558 write: IOPS=1058, BW=4236KiB/s (4337kB/s)(4240KiB/1001msec); 0 zone resets 00:19:05.558 slat (nsec): min=6156, max=75547, avg=16336.25, stdev=7951.22 00:19:05.558 clat (usec): min=246, max=615, avg=309.10, stdev=37.19 00:19:05.558 lat (usec): min=254, max=623, avg=325.44, stdev=39.21 00:19:05.558 clat percentiles (usec): 00:19:05.558 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:19:05.558 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 314], 00:19:05.558 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 375], 00:19:05.558 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 586], 99.95th=[ 619], 00:19:05.558 | 99.99th=[ 619] 00:19:05.558 bw ( KiB/s): min= 4096, max= 4096, per=25.23%, avg=4096.00, stdev= 0.00, samples=1 00:19:05.558 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:05.558 lat (usec) : 250=0.05%, 500=89.30%, 750=10.36%, 1000=0.14% 00:19:05.558 lat (msec) : 50=0.14% 00:19:05.558 cpu : usr=2.50%, sys=4.10%, ctx=2084, majf=0, minf=1 00:19:05.558 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.558 issued rwts: total=1024,1060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.558 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.558 job2: (groupid=0, jobs=1): err= 0: pid=1384016: Wed Jul 10 14:21:15 2024 00:19:05.558 read: IOPS=1022, BW=4092KiB/s 
(4190kB/s)(4096KiB/1001msec) 00:19:05.558 slat (nsec): min=6224, max=63649, avg=16366.53, stdev=7500.65 00:19:05.558 clat (usec): min=416, max=876, avg=505.87, stdev=45.59 00:19:05.558 lat (usec): min=423, max=885, avg=522.24, stdev=48.30 00:19:05.558 clat percentiles (usec): 00:19:05.558 | 1.00th=[ 437], 5.00th=[ 453], 10.00th=[ 461], 20.00th=[ 474], 00:19:05.558 | 30.00th=[ 482], 40.00th=[ 490], 50.00th=[ 494], 60.00th=[ 502], 00:19:05.558 | 70.00th=[ 515], 80.00th=[ 537], 90.00th=[ 570], 95.00th=[ 594], 00:19:05.558 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 832], 99.95th=[ 881], 00:19:05.558 | 99.99th=[ 881] 00:19:05.558 write: IOPS=1288, BW=5155KiB/s (5279kB/s)(5160KiB/1001msec); 0 zone resets 00:19:05.558 slat (nsec): min=7502, max=77536, avg=20650.01, stdev=12038.27 00:19:05.558 clat (usec): min=244, max=1035, avg=330.81, stdev=56.16 00:19:05.558 lat (usec): min=253, max=1074, avg=351.46, stdev=63.59 00:19:05.558 clat percentiles (usec): 00:19:05.558 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 285], 00:19:05.558 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 330], 00:19:05.558 | 70.00th=[ 347], 80.00th=[ 367], 90.00th=[ 408], 95.00th=[ 437], 00:19:05.558 | 99.00th=[ 478], 99.50th=[ 486], 99.90th=[ 816], 99.95th=[ 1037], 00:19:05.558 | 99.99th=[ 1037] 00:19:05.558 bw ( KiB/s): min= 5344, max= 5344, per=32.92%, avg=5344.00, stdev= 0.00, samples=1 00:19:05.558 iops : min= 1336, max= 1336, avg=1336.00, stdev= 0.00, samples=1 00:19:05.558 lat (usec) : 250=0.13%, 500=80.08%, 750=19.58%, 1000=0.17% 00:19:05.558 lat (msec) : 2=0.04% 00:19:05.558 cpu : usr=2.80%, sys=6.20%, ctx=2314, majf=0, minf=1 00:19:05.558 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.558 issued rwts: total=1024,1290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.558 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.558 job3: (groupid=0, jobs=1): err= 0: pid=1384017: Wed Jul 10 14:21:15 2024 00:19:05.558 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:05.558 slat (nsec): min=6546, max=81923, avg=25107.57, stdev=9922.43 00:19:05.558 clat (usec): min=396, max=1063, avg=506.50, stdev=74.04 00:19:05.558 lat (usec): min=418, max=1079, avg=531.61, stdev=74.74 00:19:05.558 clat percentiles (usec): 00:19:05.558 | 1.00th=[ 420], 5.00th=[ 441], 10.00th=[ 453], 20.00th=[ 465], 00:19:05.558 | 30.00th=[ 474], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 498], 00:19:05.558 | 70.00th=[ 510], 80.00th=[ 529], 90.00th=[ 570], 95.00th=[ 644], 00:19:05.558 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 1057], 99.95th=[ 1057], 00:19:05.558 | 99.99th=[ 1057] 00:19:05.558 write: IOPS=1304, BW=5219KiB/s (5344kB/s)(5224KiB/1001msec); 0 zone resets 00:19:05.558 slat (nsec): min=5633, max=70793, avg=17303.20, stdev=8294.06 00:19:05.558 clat (usec): min=236, max=3418, avg=321.22, stdev=105.70 00:19:05.558 lat (usec): min=248, max=3436, avg=338.52, stdev=106.61 00:19:05.558 clat percentiles (usec): 00:19:05.558 | 1.00th=[ 243], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 258], 00:19:05.558 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 318], 60.00th=[ 334], 00:19:05.558 | 70.00th=[ 355], 80.00th=[ 379], 90.00th=[ 404], 95.00th=[ 424], 00:19:05.558 | 99.00th=[ 478], 99.50th=[ 506], 99.90th=[ 523], 99.95th=[ 3425], 00:19:05.558 | 99.99th=[ 3425] 00:19:05.558 bw ( KiB/s): min= 4656, max= 4656, 
per=28.68%, avg=4656.00, stdev= 0.00, samples=1 00:19:05.558 iops : min= 1164, max= 1164, avg=1164.00, stdev= 0.00, samples=1 00:19:05.558 lat (usec) : 250=3.61%, 500=79.40%, 750=16.01%, 1000=0.82% 00:19:05.558 lat (msec) : 2=0.13%, 4=0.04% 00:19:05.558 cpu : usr=2.00%, sys=5.50%, ctx=2330, majf=0, minf=2 00:19:05.558 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.558 issued rwts: total=1024,1306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.558 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.558 00:19:05.558 Run status group 0 (all jobs): 00:19:05.558 READ: bw=13.2MiB/s (13.9MB/s), 1577KiB/s-4092KiB/s (1615kB/s-4190kB/s), io=13.6MiB (14.2MB), run=1001-1027msec 00:19:05.558 WRITE: bw=15.9MiB/s (16.6MB/s), 1994KiB/s-5219KiB/s (2042kB/s-5344kB/s), io=16.3MiB (17.1MB), run=1001-1027msec 00:19:05.558 00:19:05.558 Disk stats (read/write): 00:19:05.558 nvme0n1: ios=449/512, merge=0/0, ticks=636/156, in_queue=792, util=86.97% 00:19:05.558 nvme0n2: ios=778/1024, merge=0/0, ticks=499/300, in_queue=799, util=87.27% 00:19:05.558 nvme0n3: ios=886/1024, merge=0/0, ticks=443/314, in_queue=757, util=88.88% 00:19:05.558 nvme0n4: ios=939/1024, merge=0/0, ticks=474/300, in_queue=774, util=89.63% 00:19:05.558 14:21:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:05.558 [global] 00:19:05.558 thread=1 00:19:05.558 invalidate=1 00:19:05.558 rw=randwrite 00:19:05.558 time_based=1 00:19:05.558 runtime=1 00:19:05.558 ioengine=libaio 00:19:05.558 direct=1 00:19:05.558 bs=4096 00:19:05.558 iodepth=1 00:19:05.558 norandommap=0 00:19:05.558 numjobs=1 00:19:05.558 00:19:05.558 verify_dump=1 00:19:05.558 verify_backlog=512 00:19:05.558 verify_state_save=0 00:19:05.558 do_verify=1 00:19:05.558 verify=crc32c-intel 00:19:05.558 [job0] 00:19:05.558 filename=/dev/nvme0n1 00:19:05.558 [job1] 00:19:05.558 filename=/dev/nvme0n2 00:19:05.558 [job2] 00:19:05.558 filename=/dev/nvme0n3 00:19:05.558 [job3] 00:19:05.558 filename=/dev/nvme0n4 00:19:05.816 Could not set queue depth (nvme0n1) 00:19:05.816 Could not set queue depth (nvme0n2) 00:19:05.816 Could not set queue depth (nvme0n3) 00:19:05.816 Could not set queue depth (nvme0n4) 00:19:05.816 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.816 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.816 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.816 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.816 fio-3.35 00:19:05.816 Starting 4 threads 00:19:07.189 00:19:07.189 job0: (groupid=0, jobs=1): err= 0: pid=1384365: Wed Jul 10 14:21:16 2024 00:19:07.189 read: IOPS=19, BW=79.4KiB/s (81.3kB/s)(80.0KiB/1008msec) 00:19:07.189 slat (nsec): min=12689, max=37218, avg=23565.50, stdev=9728.87 00:19:07.189 clat (usec): min=40910, max=41342, avg=40991.25, stdev=91.62 00:19:07.189 lat (usec): min=40939, max=41363, avg=41014.82, stdev=89.94 00:19:07.189 clat percentiles (usec): 00:19:07.189 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:07.189 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:07.189 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:07.189 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:07.189 | 99.99th=[41157] 00:19:07.189 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:19:07.189 slat (nsec): min=7895, max=77966, avg=22979.01, stdev=11904.00 00:19:07.189 clat (usec): min=240, max=507, avg=328.31, stdev=47.51 00:19:07.189 lat (usec): min=250, max=537, avg=351.29, stdev=50.97 00:19:07.189 clat percentiles (usec): 00:19:07.189 | 1.00th=[ 258], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 293], 00:19:07.189 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 326], 00:19:07.189 | 70.00th=[ 338], 80.00th=[ 367], 90.00th=[ 404], 95.00th=[ 424], 00:19:07.189 | 99.00th=[ 465], 99.50th=[ 486], 99.90th=[ 506], 99.95th=[ 506], 00:19:07.189 | 99.99th=[ 506] 00:19:07.189 bw ( KiB/s): min= 4096, max= 4096, per=41.24%, avg=4096.00, stdev= 0.00, samples=1 00:19:07.189 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:07.189 lat (usec) : 250=0.56%, 500=95.49%, 750=0.19% 00:19:07.189 lat (msec) : 50=3.76% 00:19:07.189 cpu : usr=0.70%, sys=1.59%, ctx=533, majf=0, minf=1 00:19:07.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.189 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.189 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.189 job1: (groupid=0, jobs=1): err= 0: pid=1384366: Wed Jul 10 14:21:16 2024 00:19:07.189 read: IOPS=19, BW=78.8KiB/s (80.7kB/s)(80.0KiB/1015msec) 00:19:07.189 slat (nsec): min=12770, max=37462, avg=26542.65, stdev=9435.88 00:19:07.189 clat (usec): min=40834, max=42043, avg=41010.28, stdev=249.93 00:19:07.189 lat (usec): min=40868, max=42057, avg=41036.82, stdev=246.28 00:19:07.189 clat percentiles (usec): 00:19:07.189 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:07.189 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:07.189 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:07.189 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:07.189 | 99.99th=[42206] 00:19:07.189 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:19:07.189 slat (nsec): min=8034, max=73806, avg=23390.82, stdev=11696.76 00:19:07.189 clat (usec): min=233, max=772, avg=340.05, stdev=74.94 00:19:07.189 lat (usec): min=242, max=802, avg=363.44, stdev=73.92 00:19:07.189 clat percentiles (usec): 00:19:07.189 | 1.00th=[ 251], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277], 00:19:07.189 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 314], 60.00th=[ 330], 00:19:07.189 | 70.00th=[ 379], 80.00th=[ 404], 90.00th=[ 445], 95.00th=[ 474], 00:19:07.189 | 99.00th=[ 553], 99.50th=[ 627], 99.90th=[ 775], 99.95th=[ 775], 00:19:07.189 | 99.99th=[ 775] 00:19:07.189 bw ( KiB/s): min= 4096, max= 4096, per=41.24%, avg=4096.00, stdev= 0.00, samples=1 00:19:07.189 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:07.189 lat (usec) : 250=0.94%, 500=93.23%, 750=1.88%, 1000=0.19% 00:19:07.189 lat (msec) : 50=3.76% 00:19:07.189 cpu : usr=1.28%, sys=1.08%, ctx=533, majf=0, minf=1 00:19:07.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:19:07.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.189 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.189 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.189 job2: (groupid=0, jobs=1): err= 0: pid=1384367: Wed Jul 10 14:21:16 2024 00:19:07.189 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:19:07.189 slat (nsec): min=13000, max=34257, avg=25182.82, stdev=8926.47 00:19:07.189 clat (usec): min=523, max=41052, avg=39115.99, stdev=8620.28 00:19:07.189 lat (usec): min=543, max=41071, avg=39141.18, stdev=8621.38 00:19:07.189 clat percentiles (usec): 00:19:07.189 | 1.00th=[ 523], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:07.189 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:07.189 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:07.189 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:07.189 | 99.99th=[41157] 00:19:07.189 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:19:07.189 slat (nsec): min=7009, max=56675, avg=19112.92, stdev=9271.30 00:19:07.189 clat (usec): min=240, max=520, avg=298.87, stdev=44.31 00:19:07.189 lat (usec): min=254, max=552, avg=317.98, stdev=44.93 00:19:07.189 clat percentiles (usec): 00:19:07.189 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 265], 00:19:07.189 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:19:07.189 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 367], 95.00th=[ 392], 00:19:07.189 | 99.00th=[ 449], 99.50th=[ 465], 99.90th=[ 523], 99.95th=[ 523], 00:19:07.189 | 99.99th=[ 523] 00:19:07.189 bw ( KiB/s): min= 4096, max= 4096, per=41.24%, avg=4096.00, stdev= 0.00, samples=1 00:19:07.189 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:07.189 lat (usec) : 250=1.12%, 500=94.57%, 750=0.37% 00:19:07.189 lat (msec) : 50=3.93% 00:19:07.189 cpu : usr=0.58%, sys=0.78%, ctx=536, majf=0, minf=1 00:19:07.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.189 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.189 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.189 job3: (groupid=0, jobs=1): err= 0: pid=1384368: Wed Jul 10 14:21:16 2024 00:19:07.189 read: IOPS=966, BW=3864KiB/s (3957kB/s)(3868KiB/1001msec) 00:19:07.189 slat (nsec): min=6207, max=54602, avg=14034.16, stdev=6055.63 00:19:07.189 clat (usec): min=319, max=41070, avg=646.79, stdev=3187.70 00:19:07.189 lat (usec): min=325, max=41104, avg=660.83, stdev=3189.29 00:19:07.189 clat percentiles (usec): 00:19:07.189 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 363], 00:19:07.189 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 388], 00:19:07.189 | 70.00th=[ 400], 80.00th=[ 416], 90.00th=[ 457], 95.00th=[ 510], 00:19:07.189 | 99.00th=[ 938], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:07.189 | 99.99th=[41157] 00:19:07.189 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:07.189 slat (nsec): min=7424, max=72341, avg=21092.88, stdev=10687.59 00:19:07.189 clat (usec): min=217, max=764, avg=317.85, stdev=65.67 00:19:07.189 lat (usec): min=225, max=775, 
avg=338.94, stdev=68.41 00:19:07.189 clat percentiles (usec): 00:19:07.189 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 247], 20.00th=[ 269], 00:19:07.189 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 318], 00:19:07.189 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 400], 95.00th=[ 449], 00:19:07.189 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[ 742], 99.95th=[ 766], 00:19:07.189 | 99.99th=[ 766] 00:19:07.189 bw ( KiB/s): min= 6648, max= 6648, per=66.93%, avg=6648.00, stdev= 0.00, samples=1 00:19:07.189 iops : min= 1662, max= 1662, avg=1662.00, stdev= 0.00, samples=1 00:19:07.189 lat (usec) : 250=5.88%, 500=90.46%, 750=3.11%, 1000=0.10% 00:19:07.190 lat (msec) : 2=0.15%, 50=0.30% 00:19:07.190 cpu : usr=2.80%, sys=4.20%, ctx=1992, majf=0, minf=2 00:19:07.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.190 issued rwts: total=967,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.190 00:19:07.190 Run status group 0 (all jobs): 00:19:07.190 READ: bw=3992KiB/s (4088kB/s), 78.8KiB/s-3864KiB/s (80.7kB/s-3957kB/s), io=4116KiB (4215kB), run=1001-1031msec 00:19:07.190 WRITE: bw=9932KiB/s (10.2MB/s), 1986KiB/s-4092KiB/s (2034kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1031msec 00:19:07.190 00:19:07.190 Disk stats (read/write): 00:19:07.190 nvme0n1: ios=59/512, merge=0/0, ticks=869/148, in_queue=1017, util=100.00% 00:19:07.190 nvme0n2: ios=56/512, merge=0/0, ticks=1263/156, in_queue=1419, util=97.66% 00:19:07.190 nvme0n3: ios=75/512, merge=0/0, ticks=1647/150, in_queue=1797, util=97.81% 00:19:07.190 nvme0n4: ios=635/1024, merge=0/0, ticks=1396/308, in_queue=1704, util=97.79% 00:19:07.190 14:21:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:07.190 [global] 00:19:07.190 thread=1 00:19:07.190 invalidate=1 00:19:07.190 rw=write 00:19:07.190 time_based=1 00:19:07.190 runtime=1 00:19:07.190 ioengine=libaio 00:19:07.190 direct=1 00:19:07.190 bs=4096 00:19:07.190 iodepth=128 00:19:07.190 norandommap=0 00:19:07.190 numjobs=1 00:19:07.190 00:19:07.190 verify_dump=1 00:19:07.190 verify_backlog=512 00:19:07.190 verify_state_save=0 00:19:07.190 do_verify=1 00:19:07.190 verify=crc32c-intel 00:19:07.190 [job0] 00:19:07.190 filename=/dev/nvme0n1 00:19:07.190 [job1] 00:19:07.190 filename=/dev/nvme0n2 00:19:07.190 [job2] 00:19:07.190 filename=/dev/nvme0n3 00:19:07.190 [job3] 00:19:07.190 filename=/dev/nvme0n4 00:19:07.190 Could not set queue depth (nvme0n1) 00:19:07.190 Could not set queue depth (nvme0n2) 00:19:07.190 Could not set queue depth (nvme0n3) 00:19:07.190 Could not set queue depth (nvme0n4) 00:19:07.448 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:07.448 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:07.448 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:07.448 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:07.448 fio-3.35 00:19:07.448 Starting 4 threads 00:19:08.834 00:19:08.834 job0: (groupid=0, jobs=1): err= 0: pid=1384594: Wed Jul 10 14:21:17 2024 
00:19:08.834 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:19:08.834 slat (usec): min=2, max=13977, avg=143.02, stdev=937.47 00:19:08.834 clat (usec): min=2575, max=46045, avg=18057.90, stdev=6742.37 00:19:08.834 lat (usec): min=2579, max=46050, avg=18200.92, stdev=6820.53 00:19:08.834 clat percentiles (usec): 00:19:08.834 | 1.00th=[ 5538], 5.00th=[ 9896], 10.00th=[11600], 20.00th=[12518], 00:19:08.834 | 30.00th=[14484], 40.00th=[17171], 50.00th=[17433], 60.00th=[17957], 00:19:08.834 | 70.00th=[19530], 80.00th=[21365], 90.00th=[25560], 95.00th=[32375], 00:19:08.834 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:19:08.834 | 99.99th=[45876] 00:19:08.834 write: IOPS=3100, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1008msec); 0 zone resets 00:19:08.834 slat (usec): min=4, max=41040, avg=168.34, stdev=1308.73 00:19:08.834 clat (msec): min=2, max=111, avg=19.90, stdev=10.19 00:19:08.834 lat (msec): min=2, max=111, avg=20.07, stdev=10.36 00:19:08.834 clat percentiles (msec): 00:19:08.834 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 13], 00:19:08.834 | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 20], 00:19:08.834 | 70.00th=[ 26], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 35], 00:19:08.834 | 99.00th=[ 38], 99.50th=[ 47], 99.90th=[ 112], 99.95th=[ 112], 00:19:08.834 | 99.99th=[ 112] 00:19:08.834 bw ( KiB/s): min=12263, max=12288, per=23.90%, avg=12275.50, stdev=17.68, samples=2 00:19:08.834 iops : min= 3065, max= 3072, avg=3068.50, stdev= 4.95, samples=2 00:19:08.834 lat (msec) : 4=0.73%, 10=9.70%, 20=57.11%, 50=32.23%, 100=0.13% 00:19:08.834 lat (msec) : 250=0.11% 00:19:08.834 cpu : usr=3.08%, sys=6.16%, ctx=278, majf=0, minf=1 00:19:08.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:08.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.834 issued rwts: total=3072,3125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.834 job1: (groupid=0, jobs=1): err= 0: pid=1384595: Wed Jul 10 14:21:17 2024 00:19:08.834 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:19:08.834 slat (usec): min=3, max=14820, avg=135.55, stdev=753.18 00:19:08.834 clat (usec): min=10225, max=35783, avg=16792.39, stdev=4747.64 00:19:08.834 lat (usec): min=10290, max=35793, avg=16927.93, stdev=4776.46 00:19:08.834 clat percentiles (usec): 00:19:08.834 | 1.00th=[11076], 5.00th=[12125], 10.00th=[12780], 20.00th=[13435], 00:19:08.834 | 30.00th=[13829], 40.00th=[14484], 50.00th=[14746], 60.00th=[15139], 00:19:08.834 | 70.00th=[16319], 80.00th=[22152], 90.00th=[23462], 95.00th=[27132], 00:19:08.834 | 99.00th=[29754], 99.50th=[30016], 99.90th=[35914], 99.95th=[35914], 00:19:08.834 | 99.99th=[35914] 00:19:08.834 write: IOPS=3392, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1004msec); 0 zone resets 00:19:08.834 slat (usec): min=4, max=40983, avg=163.75, stdev=1319.58 00:19:08.834 clat (usec): min=500, max=97703, avg=19125.50, stdev=9943.51 00:19:08.834 lat (usec): min=3138, max=97760, avg=19289.25, stdev=10072.71 00:19:08.834 clat percentiles (usec): 00:19:08.834 | 1.00th=[ 8586], 5.00th=[11600], 10.00th=[12387], 20.00th=[13042], 00:19:08.834 | 30.00th=[13566], 40.00th=[14353], 50.00th=[15401], 60.00th=[16909], 00:19:08.834 | 70.00th=[20317], 80.00th=[22938], 90.00th=[30016], 95.00th=[43779], 00:19:08.834 | 99.00th=[56886], 99.50th=[56886], 99.90th=[94897], 99.95th=[98042], 
00:19:08.834 | 99.99th=[98042] 00:19:08.834 bw ( KiB/s): min=12263, max=13936, per=25.50%, avg=13099.50, stdev=1182.99, samples=2 00:19:08.834 iops : min= 3065, max= 3484, avg=3274.50, stdev=296.28, samples=2 00:19:08.834 lat (usec) : 750=0.02% 00:19:08.834 lat (msec) : 4=0.49%, 10=0.51%, 20=70.42%, 50=26.89%, 100=1.67% 00:19:08.834 cpu : usr=3.99%, sys=5.08%, ctx=336, majf=0, minf=1 00:19:08.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:08.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.834 issued rwts: total=3072,3406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.834 job2: (groupid=0, jobs=1): err= 0: pid=1384596: Wed Jul 10 14:21:17 2024 00:19:08.834 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:19:08.834 slat (usec): min=2, max=19891, avg=222.47, stdev=1441.48 00:19:08.834 clat (usec): min=9611, max=64047, avg=27257.33, stdev=12697.13 00:19:08.834 lat (usec): min=9621, max=64056, avg=27479.80, stdev=12779.30 00:19:08.834 clat percentiles (usec): 00:19:08.834 | 1.00th=[ 9765], 5.00th=[13698], 10.00th=[14746], 20.00th=[16581], 00:19:08.834 | 30.00th=[17957], 40.00th=[19792], 50.00th=[25035], 60.00th=[29492], 00:19:08.834 | 70.00th=[30802], 80.00th=[34341], 90.00th=[44827], 95.00th=[55313], 00:19:08.834 | 99.00th=[61080], 99.50th=[64226], 99.90th=[64226], 99.95th=[64226], 00:19:08.834 | 99.99th=[64226] 00:19:08.834 write: IOPS=2482, BW=9931KiB/s (10.2MB/s)(9.79MiB/1009msec); 0 zone resets 00:19:08.834 slat (usec): min=4, max=19132, avg=212.25, stdev=1214.63 00:19:08.834 clat (usec): min=769, max=72934, avg=28539.31, stdev=17041.71 00:19:08.834 lat (usec): min=10587, max=72947, avg=28751.56, stdev=17125.78 00:19:08.834 clat percentiles (usec): 00:19:08.834 | 1.00th=[10814], 5.00th=[13435], 10.00th=[15139], 20.00th=[15664], 00:19:08.834 | 30.00th=[16188], 40.00th=[16712], 50.00th=[20579], 60.00th=[25035], 00:19:08.834 | 70.00th=[34341], 80.00th=[43254], 90.00th=[56361], 95.00th=[67634], 00:19:08.834 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:19:08.834 | 99.99th=[72877] 00:19:08.834 bw ( KiB/s): min= 6832, max=12184, per=18.51%, avg=9508.00, stdev=3784.44, samples=2 00:19:08.834 iops : min= 1708, max= 3046, avg=2377.00, stdev=946.11, samples=2 00:19:08.834 lat (usec) : 1000=0.02% 00:19:08.834 lat (msec) : 10=0.66%, 20=45.27%, 50=41.75%, 100=12.30% 00:19:08.834 cpu : usr=2.88%, sys=3.67%, ctx=241, majf=0, minf=1 00:19:08.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:08.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.834 issued rwts: total=2048,2505,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.834 job3: (groupid=0, jobs=1): err= 0: pid=1384597: Wed Jul 10 14:21:17 2024 00:19:08.834 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:19:08.834 slat (usec): min=3, max=13451, avg=122.37, stdev=727.75 00:19:08.834 clat (usec): min=3773, max=37639, avg=16658.59, stdev=3722.53 00:19:08.834 lat (usec): min=3784, max=37723, avg=16780.96, stdev=3749.58 00:19:08.834 clat percentiles (usec): 00:19:08.834 | 1.00th=[ 6783], 5.00th=[11731], 10.00th=[12911], 20.00th=[14877], 00:19:08.834 | 
30.00th=[15401], 40.00th=[15795], 50.00th=[16188], 60.00th=[16581], 00:19:08.834 | 70.00th=[16712], 80.00th=[17957], 90.00th=[21103], 95.00th=[24249], 00:19:08.834 | 99.00th=[27395], 99.50th=[27395], 99.90th=[27657], 99.95th=[36963], 00:19:08.834 | 99.99th=[37487] 00:19:08.834 write: IOPS=3886, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1009msec); 0 zone resets 00:19:08.834 slat (usec): min=4, max=12439, avg=130.83, stdev=764.40 00:19:08.834 clat (usec): min=3664, max=34488, avg=17237.54, stdev=3105.55 00:19:08.834 lat (usec): min=5502, max=38014, avg=17368.37, stdev=3166.81 00:19:08.834 clat percentiles (usec): 00:19:08.834 | 1.00th=[ 9110], 5.00th=[13042], 10.00th=[14484], 20.00th=[15664], 00:19:08.834 | 30.00th=[16188], 40.00th=[16581], 50.00th=[16909], 60.00th=[17433], 00:19:08.834 | 70.00th=[17957], 80.00th=[18482], 90.00th=[20841], 95.00th=[23200], 00:19:08.834 | 99.00th=[27395], 99.50th=[27919], 99.90th=[28967], 99.95th=[34341], 00:19:08.834 | 99.99th=[34341] 00:19:08.834 bw ( KiB/s): min=13960, max=16351, per=29.50%, avg=15155.50, stdev=1690.69, samples=2 00:19:08.834 iops : min= 3490, max= 4087, avg=3788.50, stdev=422.14, samples=2 00:19:08.834 lat (msec) : 4=0.09%, 10=2.48%, 20=84.14%, 50=13.28% 00:19:08.834 cpu : usr=3.57%, sys=6.75%, ctx=361, majf=0, minf=1 00:19:08.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:08.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.835 issued rwts: total=3584,3921,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.835 00:19:08.835 Run status group 0 (all jobs): 00:19:08.835 READ: bw=45.6MiB/s (47.8MB/s), 8119KiB/s-13.9MiB/s (8314kB/s-14.5MB/s), io=46.0MiB (48.2MB), run=1004-1009msec 00:19:08.835 WRITE: bw=50.2MiB/s (52.6MB/s), 9931KiB/s-15.2MiB/s (10.2MB/s-15.9MB/s), io=50.6MiB (53.1MB), run=1004-1009msec 00:19:08.835 00:19:08.835 Disk stats (read/write): 00:19:08.835 nvme0n1: ios=2198/2560, merge=0/0, ticks=24395/28025, in_queue=52420, util=99.60% 00:19:08.835 nvme0n2: ios=2575/2985, merge=0/0, ticks=13201/16286, in_queue=29487, util=96.54% 00:19:08.835 nvme0n3: ios=1741/2048, merge=0/0, ticks=15228/15208, in_queue=30436, util=95.28% 00:19:08.835 nvme0n4: ios=3117/3438, merge=0/0, ticks=17125/19454, in_queue=36579, util=97.88% 00:19:08.835 14:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:08.835 [global] 00:19:08.835 thread=1 00:19:08.835 invalidate=1 00:19:08.835 rw=randwrite 00:19:08.835 time_based=1 00:19:08.835 runtime=1 00:19:08.835 ioengine=libaio 00:19:08.835 direct=1 00:19:08.835 bs=4096 00:19:08.835 iodepth=128 00:19:08.835 norandommap=0 00:19:08.835 numjobs=1 00:19:08.835 00:19:08.835 verify_dump=1 00:19:08.835 verify_backlog=512 00:19:08.835 verify_state_save=0 00:19:08.835 do_verify=1 00:19:08.835 verify=crc32c-intel 00:19:08.835 [job0] 00:19:08.835 filename=/dev/nvme0n1 00:19:08.835 [job1] 00:19:08.835 filename=/dev/nvme0n2 00:19:08.835 [job2] 00:19:08.835 filename=/dev/nvme0n3 00:19:08.835 [job3] 00:19:08.835 filename=/dev/nvme0n4 00:19:08.835 Could not set queue depth (nvme0n1) 00:19:08.835 Could not set queue depth (nvme0n2) 00:19:08.835 Could not set queue depth (nvme0n3) 00:19:08.835 Could not set queue depth (nvme0n4) 00:19:08.835 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.835 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.835 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.835 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.835 fio-3.35 00:19:08.835 Starting 4 threads 00:19:10.210 00:19:10.210 job0: (groupid=0, jobs=1): err= 0: pid=1384821: Wed Jul 10 14:21:19 2024 00:19:10.210 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:19:10.210 slat (usec): min=2, max=16097, avg=168.07, stdev=1187.11 00:19:10.210 clat (usec): min=7649, max=65248, avg=20160.59, stdev=7941.95 00:19:10.210 lat (usec): min=7654, max=65254, avg=20328.66, stdev=8048.71 00:19:10.210 clat percentiles (usec): 00:19:10.210 | 1.00th=[ 9765], 5.00th=[12649], 10.00th=[13435], 20.00th=[15008], 00:19:10.210 | 30.00th=[16057], 40.00th=[16909], 50.00th=[17171], 60.00th=[18744], 00:19:10.210 | 70.00th=[20317], 80.00th=[25822], 90.00th=[31327], 95.00th=[35390], 00:19:10.210 | 99.00th=[46400], 99.50th=[56886], 99.90th=[65274], 99.95th=[65274], 00:19:10.210 | 99.99th=[65274] 00:19:10.210 write: IOPS=2794, BW=10.9MiB/s (11.4MB/s)(11.1MiB/1014msec); 0 zone resets 00:19:10.210 slat (usec): min=3, max=31619, avg=185.84, stdev=1313.79 00:19:10.210 clat (usec): min=3992, max=92715, avg=27088.69, stdev=17878.51 00:19:10.210 lat (usec): min=3999, max=92722, avg=27274.54, stdev=17976.83 00:19:10.210 clat percentiles (usec): 00:19:10.210 | 1.00th=[ 5669], 5.00th=[ 9372], 10.00th=[10552], 20.00th=[13173], 00:19:10.210 | 30.00th=[14222], 40.00th=[17171], 50.00th=[19268], 60.00th=[25822], 00:19:10.210 | 70.00th=[34866], 80.00th=[37487], 90.00th=[49021], 95.00th=[69731], 00:19:10.210 | 99.00th=[85459], 99.50th=[87557], 99.90th=[92799], 99.95th=[92799], 00:19:10.210 | 99.99th=[92799] 00:19:10.210 bw ( KiB/s): min= 9360, max=12288, per=25.31%, avg=10824.00, stdev=2070.41, samples=2 00:19:10.210 iops : min= 2340, max= 3072, avg=2706.00, stdev=517.60, samples=2 00:19:10.210 lat (msec) : 4=0.11%, 10=4.38%, 20=56.80%, 50=33.39%, 100=5.32% 00:19:10.210 cpu : usr=1.97%, sys=3.16%, ctx=244, majf=0, minf=1 00:19:10.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:10.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.210 issued rwts: total=2560,2834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.210 job1: (groupid=0, jobs=1): err= 0: pid=1384822: Wed Jul 10 14:21:19 2024 00:19:10.210 read: IOPS=1505, BW=6024KiB/s (6168kB/s)(6144KiB/1020msec) 00:19:10.210 slat (usec): min=3, max=32917, avg=236.20, stdev=1688.32 00:19:10.210 clat (usec): min=9533, max=83057, avg=27724.56, stdev=14183.50 00:19:10.210 lat (usec): min=9546, max=83075, avg=27960.75, stdev=14317.16 00:19:10.210 clat percentiles (usec): 00:19:10.210 | 1.00th=[10814], 5.00th=[12911], 10.00th=[13435], 20.00th=[14877], 00:19:10.210 | 30.00th=[20579], 40.00th=[21627], 50.00th=[25822], 60.00th=[26346], 00:19:10.210 | 70.00th=[29492], 80.00th=[35390], 90.00th=[45876], 95.00th=[58459], 00:19:10.210 | 99.00th=[81265], 99.50th=[82314], 99.90th=[83362], 99.95th=[83362], 00:19:10.210 | 99.99th=[83362] 00:19:10.210 write: IOPS=1919, BW=7678KiB/s 
(7863kB/s)(7832KiB/1020msec); 0 zone resets 00:19:10.210 slat (usec): min=4, max=16777, avg=318.08, stdev=1494.37 00:19:10.210 clat (usec): min=1532, max=151346, avg=44296.83, stdev=32350.55 00:19:10.210 lat (usec): min=1544, max=151368, avg=44614.91, stdev=32557.87 00:19:10.210 clat percentiles (msec): 00:19:10.210 | 1.00th=[ 10], 5.00th=[ 15], 10.00th=[ 18], 20.00th=[ 20], 00:19:10.210 | 30.00th=[ 22], 40.00th=[ 30], 50.00th=[ 34], 60.00th=[ 38], 00:19:10.210 | 70.00th=[ 47], 80.00th=[ 70], 90.00th=[ 88], 95.00th=[ 123], 00:19:10.210 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 153], 99.95th=[ 153], 00:19:10.210 | 99.99th=[ 153] 00:19:10.211 bw ( KiB/s): min= 6272, max= 8384, per=17.13%, avg=7328.00, stdev=1493.41, samples=2 00:19:10.211 iops : min= 1568, max= 2096, avg=1832.00, stdev=373.35, samples=2 00:19:10.211 lat (msec) : 2=0.06%, 10=0.72%, 20=24.99%, 50=55.95%, 100=14.20% 00:19:10.211 lat (msec) : 250=4.09% 00:19:10.211 cpu : usr=1.96%, sys=3.14%, ctx=212, majf=0, minf=1 00:19:10.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:10.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.211 issued rwts: total=1536,1958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.211 job2: (groupid=0, jobs=1): err= 0: pid=1384824: Wed Jul 10 14:21:19 2024 00:19:10.211 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:19:10.211 slat (usec): min=2, max=14821, avg=103.27, stdev=849.63 00:19:10.211 clat (usec): min=4966, max=66419, avg=14908.61, stdev=5613.05 00:19:10.211 lat (usec): min=4974, max=66422, avg=15011.89, stdev=5653.80 00:19:10.211 clat percentiles (usec): 00:19:10.211 | 1.00th=[ 7046], 5.00th=[10028], 10.00th=[10421], 20.00th=[12780], 00:19:10.211 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13698], 60.00th=[14353], 00:19:10.211 | 70.00th=[15270], 80.00th=[16319], 90.00th=[20841], 95.00th=[22152], 00:19:10.211 | 99.00th=[28181], 99.50th=[62653], 99.90th=[64226], 99.95th=[64226], 00:19:10.211 | 99.99th=[66323] 00:19:10.211 write: IOPS=4538, BW=17.7MiB/s (18.6MB/s)(17.9MiB/1009msec); 0 zone resets 00:19:10.211 slat (usec): min=3, max=16118, avg=105.37, stdev=731.11 00:19:10.211 clat (usec): min=1550, max=57312, avg=14620.21, stdev=7478.51 00:19:10.211 lat (usec): min=1611, max=57317, avg=14725.58, stdev=7519.80 00:19:10.211 clat percentiles (usec): 00:19:10.211 | 1.00th=[ 4686], 5.00th=[ 7177], 10.00th=[ 7767], 20.00th=[ 9503], 00:19:10.211 | 30.00th=[12518], 40.00th=[13829], 50.00th=[14222], 60.00th=[14746], 00:19:10.211 | 70.00th=[15008], 80.00th=[16057], 90.00th=[18744], 95.00th=[25822], 00:19:10.211 | 99.00th=[50594], 99.50th=[55313], 99.90th=[57410], 99.95th=[57410], 00:19:10.211 | 99.99th=[57410] 00:19:10.211 bw ( KiB/s): min=16984, max=18624, per=41.62%, avg=17804.00, stdev=1159.66, samples=2 00:19:10.211 iops : min= 4246, max= 4656, avg=4451.00, stdev=289.91, samples=2 00:19:10.211 lat (msec) : 2=0.01%, 4=0.31%, 10=14.81%, 20=76.63%, 50=7.15% 00:19:10.211 lat (msec) : 100=1.08% 00:19:10.211 cpu : usr=4.56%, sys=6.94%, ctx=405, majf=0, minf=1 00:19:10.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:10.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.211 issued rwts: total=4096,4579,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:19:10.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.211 job3: (groupid=0, jobs=1): err= 0: pid=1384826: Wed Jul 10 14:21:19 2024 00:19:10.211 read: IOPS=1075, BW=4300KiB/s (4404kB/s)(4352KiB/1012msec) 00:19:10.211 slat (usec): min=2, max=33159, avg=329.11, stdev=2193.66 00:19:10.211 clat (msec): min=8, max=123, avg=50.25, stdev=26.01 00:19:10.211 lat (msec): min=11, max=136, avg=50.58, stdev=26.12 00:19:10.211 clat percentiles (msec): 00:19:10.211 | 1.00th=[ 20], 5.00th=[ 21], 10.00th=[ 25], 20.00th=[ 28], 00:19:10.211 | 30.00th=[ 34], 40.00th=[ 38], 50.00th=[ 38], 60.00th=[ 52], 00:19:10.211 | 70.00th=[ 58], 80.00th=[ 74], 90.00th=[ 77], 95.00th=[ 108], 00:19:10.211 | 99.00th=[ 124], 99.50th=[ 124], 99.90th=[ 124], 99.95th=[ 124], 00:19:10.211 | 99.99th=[ 124] 00:19:10.211 write: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec); 0 zone resets 00:19:10.211 slat (usec): min=3, max=37534, avg=414.99, stdev=2622.45 00:19:10.211 clat (msec): min=16, max=110, avg=45.36, stdev=27.44 00:19:10.211 lat (msec): min=16, max=110, avg=45.78, stdev=27.60 00:19:10.211 clat percentiles (msec): 00:19:10.211 | 1.00th=[ 17], 5.00th=[ 20], 10.00th=[ 20], 20.00th=[ 21], 00:19:10.211 | 30.00th=[ 28], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 44], 00:19:10.211 | 70.00th=[ 60], 80.00th=[ 74], 90.00th=[ 90], 95.00th=[ 104], 00:19:10.211 | 99.00th=[ 111], 99.50th=[ 111], 99.90th=[ 111], 99.95th=[ 111], 00:19:10.211 | 99.99th=[ 111] 00:19:10.211 bw ( KiB/s): min= 4160, max= 7616, per=13.77%, avg=5888.00, stdev=2443.76, samples=2 00:19:10.211 iops : min= 1040, max= 1904, avg=1472.00, stdev=610.94, samples=2 00:19:10.211 lat (msec) : 10=0.04%, 20=11.32%, 50=50.50%, 100=31.10%, 250=7.05% 00:19:10.211 cpu : usr=1.19%, sys=1.38%, ctx=112, majf=0, minf=1 00:19:10.211 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:19:10.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.211 issued rwts: total=1088,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.211 00:19:10.211 Run status group 0 (all jobs): 00:19:10.211 READ: bw=35.5MiB/s (37.3MB/s), 4300KiB/s-15.9MiB/s (4404kB/s-16.6MB/s), io=36.2MiB (38.0MB), run=1009-1020msec 00:19:10.211 WRITE: bw=41.8MiB/s (43.8MB/s), 6071KiB/s-17.7MiB/s (6217kB/s-18.6MB/s), io=42.6MiB (44.7MB), run=1009-1020msec 00:19:10.211 00:19:10.211 Disk stats (read/write): 00:19:10.211 nvme0n1: ios=2066/2535, merge=0/0, ticks=34970/49841, in_queue=84811, util=89.78% 00:19:10.211 nvme0n2: ios=1581/1607, merge=0/0, ticks=42343/62962, in_queue=105305, util=94.52% 00:19:10.211 nvme0n3: ios=3600/3591, merge=0/0, ticks=52220/46447, in_queue=98667, util=97.71% 00:19:10.211 nvme0n4: ios=1013/1024, merge=0/0, ticks=13261/16871, in_queue=30132, util=96.95% 00:19:10.211 14:21:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:10.211 14:21:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1384969 00:19:10.211 14:21:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:10.211 14:21:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:10.211 [global] 00:19:10.211 thread=1 00:19:10.211 invalidate=1 00:19:10.211 rw=read 00:19:10.211 time_based=1 00:19:10.211 runtime=10 00:19:10.211 ioengine=libaio 00:19:10.211 
direct=1 00:19:10.211 bs=4096 00:19:10.211 iodepth=1 00:19:10.211 norandommap=1 00:19:10.211 numjobs=1 00:19:10.211 00:19:10.211 [job0] 00:19:10.211 filename=/dev/nvme0n1 00:19:10.211 [job1] 00:19:10.211 filename=/dev/nvme0n2 00:19:10.211 [job2] 00:19:10.211 filename=/dev/nvme0n3 00:19:10.211 [job3] 00:19:10.211 filename=/dev/nvme0n4 00:19:10.211 Could not set queue depth (nvme0n1) 00:19:10.211 Could not set queue depth (nvme0n2) 00:19:10.211 Could not set queue depth (nvme0n3) 00:19:10.211 Could not set queue depth (nvme0n4) 00:19:10.211 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.211 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.211 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.211 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.211 fio-3.35 00:19:10.211 Starting 4 threads 00:19:13.490 14:21:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:13.490 14:21:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:13.490 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3588096, buflen=4096 00:19:13.490 fio: pid=1385101, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:13.490 14:21:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.490 14:21:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:13.490 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=323584, buflen=4096 00:19:13.490 fio: pid=1385089, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:13.746 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=34254848, buflen=4096 00:19:13.746 fio: pid=1385057, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:14.003 14:21:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.003 14:21:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:14.261 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=4997120, buflen=4096 00:19:14.261 fio: pid=1385065, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:14.261 14:21:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.261 14:21:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:14.261 00:19:14.261 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1385057: Wed Jul 10 14:21:23 2024 00:19:14.261 read: IOPS=2422, BW=9691KiB/s (9923kB/s)(32.7MiB/3452msec) 00:19:14.261 slat (usec): min=4, max=19078, avg=15.80, stdev=301.56 00:19:14.261 clat (usec): min=303, max=41532, avg=391.05, stdev=1093.46 00:19:14.261 lat (usec): min=309, max=41546, avg=406.85, stdev=1135.13 00:19:14.261 
clat percentiles (usec): 00:19:14.261 | 1.00th=[ 314], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 330], 00:19:14.261 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 351], 00:19:14.261 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 392], 95.00th=[ 482], 00:19:14.261 | 99.00th=[ 627], 99.50th=[ 734], 99.90th=[ 1598], 99.95th=[41157], 00:19:14.261 | 99.99th=[41681] 00:19:14.261 bw ( KiB/s): min= 5656, max=11792, per=91.04%, avg=10041.33, stdev=2217.31, samples=6 00:19:14.261 iops : min= 1414, max= 2948, avg=2510.33, stdev=554.33, samples=6 00:19:14.261 lat (usec) : 500=96.50%, 750=3.04%, 1000=0.11% 00:19:14.261 lat (msec) : 2=0.25%, 4=0.02%, 50=0.07% 00:19:14.261 cpu : usr=1.74%, sys=3.54%, ctx=8369, majf=0, minf=1 00:19:14.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.261 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.261 issued rwts: total=8364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.261 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1385065: Wed Jul 10 14:21:23 2024 00:19:14.261 read: IOPS=319, BW=1277KiB/s (1307kB/s)(4880KiB/3822msec) 00:19:14.261 slat (usec): min=4, max=29137, avg=73.91, stdev=1126.38 00:19:14.261 clat (usec): min=343, max=42953, avg=3032.23, stdev=9812.73 00:19:14.261 lat (usec): min=352, max=63029, avg=3106.19, stdev=9969.74 00:19:14.261 clat percentiles (usec): 00:19:14.261 | 1.00th=[ 375], 5.00th=[ 396], 10.00th=[ 408], 20.00th=[ 441], 00:19:14.261 | 30.00th=[ 457], 40.00th=[ 474], 50.00th=[ 490], 60.00th=[ 515], 00:19:14.261 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[ 660], 95.00th=[41157], 00:19:14.261 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:19:14.261 | 99.99th=[42730] 00:19:14.261 bw ( KiB/s): min= 96, max= 3768, per=11.91%, avg=1313.86, stdev=1588.68, samples=7 00:19:14.261 iops : min= 24, max= 942, avg=328.43, stdev=397.13, samples=7 00:19:14.261 lat (usec) : 500=55.36%, 750=36.53%, 1000=0.16% 00:19:14.261 lat (msec) : 2=1.64%, 50=6.22% 00:19:14.261 cpu : usr=0.16%, sys=0.76%, ctx=1225, majf=0, minf=1 00:19:14.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.261 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.261 issued rwts: total=1221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.261 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1385089: Wed Jul 10 14:21:23 2024 00:19:14.261 read: IOPS=25, BW=99.3KiB/s (102kB/s)(316KiB/3182msec) 00:19:14.261 slat (nsec): min=11631, max=39335, avg=20581.54, stdev=8821.74 00:19:14.261 clat (usec): min=448, max=41468, avg=39961.00, stdev=6372.17 00:19:14.261 lat (usec): min=460, max=41480, avg=39981.41, stdev=6371.26 00:19:14.261 clat percentiles (usec): 00:19:14.261 | 1.00th=[ 449], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:14.261 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:14.261 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:14.261 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:14.261 | 99.99th=[41681] 00:19:14.261 bw ( 
KiB/s): min= 96, max= 104, per=0.91%, avg=100.00, stdev= 4.38, samples=6 00:19:14.261 iops : min= 24, max= 26, avg=25.00, stdev= 1.10, samples=6 00:19:14.261 lat (usec) : 500=1.25%, 1000=1.25% 00:19:14.261 lat (msec) : 50=96.25% 00:19:14.261 cpu : usr=0.09%, sys=0.00%, ctx=80, majf=0, minf=1 00:19:14.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.261 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.261 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.261 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1385101: Wed Jul 10 14:21:23 2024 00:19:14.261 read: IOPS=298, BW=1193KiB/s (1222kB/s)(3504KiB/2937msec) 00:19:14.261 slat (nsec): min=5162, max=67668, avg=22447.39, stdev=9981.39 00:19:14.261 clat (usec): min=339, max=41367, avg=3298.43, stdev=10224.21 00:19:14.261 lat (usec): min=345, max=41382, avg=3320.88, stdev=10223.59 00:19:14.261 clat percentiles (usec): 00:19:14.261 | 1.00th=[ 351], 5.00th=[ 383], 10.00th=[ 461], 20.00th=[ 490], 00:19:14.261 | 30.00th=[ 502], 40.00th=[ 510], 50.00th=[ 519], 60.00th=[ 529], 00:19:14.261 | 70.00th=[ 545], 80.00th=[ 594], 90.00th=[ 693], 95.00th=[41157], 00:19:14.261 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:14.261 | 99.99th=[41157] 00:19:14.261 bw ( KiB/s): min= 96, max= 3792, per=10.98%, avg=1211.20, stdev=1552.41, samples=5 00:19:14.261 iops : min= 24, max= 948, avg=302.80, stdev=388.10, samples=5 00:19:14.261 lat (usec) : 500=28.73%, 750=63.17%, 1000=0.80% 00:19:14.261 lat (msec) : 2=0.23%, 4=0.11%, 50=6.84% 00:19:14.261 cpu : usr=0.48%, sys=0.58%, ctx=877, majf=0, minf=1 00:19:14.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.261 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.261 issued rwts: total=877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.261 00:19:14.261 Run status group 0 (all jobs): 00:19:14.261 READ: bw=10.8MiB/s (11.3MB/s), 99.3KiB/s-9691KiB/s (102kB/s-9923kB/s), io=41.2MiB (43.2MB), run=2937-3822msec 00:19:14.261 00:19:14.261 Disk stats (read/write): 00:19:14.262 nvme0n1: ios=8146/0, merge=0/0, ticks=3134/0, in_queue=3134, util=94.16% 00:19:14.262 nvme0n2: ios=1215/0, merge=0/0, ticks=3490/0, in_queue=3490, util=94.42% 00:19:14.262 nvme0n3: ios=77/0, merge=0/0, ticks=3077/0, in_queue=3077, util=96.69% 00:19:14.262 nvme0n4: ios=874/0, merge=0/0, ticks=2790/0, in_queue=2790, util=96.73% 00:19:14.519 14:21:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.519 14:21:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:14.776 14:21:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.776 14:21:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:15.033 14:21:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.033 14:21:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:15.597 14:21:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.597 14:21:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:15.854 14:21:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:15.854 14:21:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1384969 00:19:15.854 14:21:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:15.854 14:21:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:16.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:16.786 14:21:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:16.786 14:21:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:16.786 14:21:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:16.786 14:21:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:16.786 14:21:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:16.786 14:21:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:16.786 14:21:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:16.786 14:21:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:16.786 14:21:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:16.786 nvmf hotplug test: fio failed as expected 00:19:16.786 14:21:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.786 14:21:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:16.786 14:21:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:16.786 14:21:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:16.786 14:21:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:16.786 14:21:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:16.786 14:21:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:16.786 14:21:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:16.786 14:21:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.786 14:21:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:16.786 14:21:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.786 14:21:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.786 rmmod nvme_tcp 00:19:17.044 rmmod nvme_fabrics 00:19:17.044 rmmod nvme_keyring 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@125 -- # return 0 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1382422 ']' 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1382422 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1382422 ']' 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1382422 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1382422 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1382422' 00:19:17.044 killing process with pid 1382422 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1382422 00:19:17.044 14:21:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1382422 00:19:18.417 14:21:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:18.417 14:21:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:18.417 14:21:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:18.417 14:21:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:18.417 14:21:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:18.417 14:21:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.417 14:21:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.417 14:21:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.321 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:20.321 00:19:20.321 real 0m26.576s 00:19:20.321 user 1m29.772s 00:19:20.321 sys 0m7.372s 00:19:20.321 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:20.321 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.321 ************************************ 00:19:20.321 END TEST nvmf_fio_target 00:19:20.321 ************************************ 00:19:20.321 14:21:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:20.321 14:21:29 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:20.321 14:21:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:20.321 14:21:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:20.321 14:21:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:20.321 ************************************ 00:19:20.321 START TEST nvmf_bdevio 00:19:20.321 ************************************ 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:20.321 * Looking for test storage... 
00:19:20.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:20.321 14:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:22.850 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:22.850 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:22.850 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:22.850 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.850 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:22.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:19:22.851 00:19:22.851 --- 10.0.0.2 ping statistics --- 00:19:22.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.851 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:22.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:22.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:19:22.851 00:19:22.851 --- 10.0.0.1 ping statistics --- 00:19:22.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.851 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1387943 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1387943 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1387943 ']' 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.851 14:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:22.851 [2024-07-10 14:21:32.057287] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:19:22.851 [2024-07-10 14:21:32.057458] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.851 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.851 [2024-07-10 14:21:32.189354] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.109 [2024-07-10 14:21:32.418767] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.109 [2024-07-10 14:21:32.418843] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
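At this point nvmf_tcp_init has finished building the two-endpoint topology the TCP suites run on: one port of the NIC pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and serves as the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, and a ping in each direction checks the path. Condensed directly from the trace above, the plumbing amounts to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator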
00:19:23.109 [2024-07-10 14:21:32.418868] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.109 [2024-07-10 14:21:32.418885] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.109 [2024-07-10 14:21:32.418902] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.109 [2024-07-10 14:21:32.419263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:23.109 [2024-07-10 14:21:32.419328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:23.109 [2024-07-10 14:21:32.419368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.109 [2024-07-10 14:21:32.419394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:23.673 [2024-07-10 14:21:33.031089] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:23.673 Malloc0 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
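With the target app up inside the namespace, bdevio.sh provisions it through rpc_cmd: a TCP transport, a Malloc bdev sized by MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512, subsystem cnode1, its namespace, and a listener on 10.0.0.2:4420. rpc_cmd is effectively a wrapper around scripts/rpc.py aimed at the default /var/tmp/spdk.sock socket, so the same sequence run by hand would look roughly like:

  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC nvmf_create_transport -t tcp -o -u 8192                 # same options as NVMF_TRANSPORT_OPTS above
  $RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB backing bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420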
00:19:23.673 [2024-07-10 14:21:33.134131] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:23.673 14:21:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:23.673 { 00:19:23.673 "params": { 00:19:23.673 "name": "Nvme$subsystem", 00:19:23.673 "trtype": "$TEST_TRANSPORT", 00:19:23.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.673 "adrfam": "ipv4", 00:19:23.673 "trsvcid": "$NVMF_PORT", 00:19:23.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.673 "hdgst": ${hdgst:-false}, 00:19:23.673 "ddgst": ${ddgst:-false} 00:19:23.673 }, 00:19:23.673 "method": "bdev_nvme_attach_controller" 00:19:23.673 } 00:19:23.673 EOF 00:19:23.673 )") 00:19:23.674 14:21:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:23.674 14:21:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:23.674 14:21:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:23.674 14:21:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:23.674 "params": { 00:19:23.674 "name": "Nvme1", 00:19:23.674 "trtype": "tcp", 00:19:23.674 "traddr": "10.0.0.2", 00:19:23.674 "adrfam": "ipv4", 00:19:23.674 "trsvcid": "4420", 00:19:23.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.674 "hdgst": false, 00:19:23.674 "ddgst": false 00:19:23.674 }, 00:19:23.674 "method": "bdev_nvme_attach_controller" 00:19:23.674 }' 00:19:23.930 [2024-07-10 14:21:33.215602] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
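The JSON that gen_nvmf_target_json prints above is simply the startup-config form of one bdev_nvme_attach_controller call: bdevio reads it from /dev/fd/62 via --json and builds the Nvme1n1 bdev from it before running its suite. Attaching the same controller by hand against a live SPDK app would look roughly as below (flag spelling assumes the stock scripts/rpc.py interface; digests stay off, matching hdgst/ddgst=false in the config):

  rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1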
00:19:23.930 [2024-07-10 14:21:33.215741] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388095 ] 00:19:23.930 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.930 [2024-07-10 14:21:33.340799] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:24.187 [2024-07-10 14:21:33.586886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.187 [2024-07-10 14:21:33.586927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.187 [2024-07-10 14:21:33.586937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.752 I/O targets: 00:19:24.752 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:24.752 00:19:24.752 00:19:24.752 CUnit - A unit testing framework for C - Version 2.1-3 00:19:24.752 http://cunit.sourceforge.net/ 00:19:24.752 00:19:24.752 00:19:24.752 Suite: bdevio tests on: Nvme1n1 00:19:24.752 Test: blockdev write read block ...passed 00:19:24.752 Test: blockdev write zeroes read block ...passed 00:19:24.752 Test: blockdev write zeroes read no split ...passed 00:19:24.752 Test: blockdev write zeroes read split ...passed 00:19:25.008 Test: blockdev write zeroes read split partial ...passed 00:19:25.008 Test: blockdev reset ...[2024-07-10 14:21:34.294379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.008 [2024-07-10 14:21:34.294576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:19:25.008 [2024-07-10 14:21:34.308333] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:25.008 passed 00:19:25.008 Test: blockdev write read 8 blocks ...passed 00:19:25.008 Test: blockdev write read size > 128k ...passed 00:19:25.008 Test: blockdev write read invalid size ...passed 00:19:25.008 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:25.008 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:25.008 Test: blockdev write read max offset ...passed 00:19:25.008 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:25.008 Test: blockdev writev readv 8 blocks ...passed 00:19:25.008 Test: blockdev writev readv 30 x 1block ...passed 00:19:25.266 Test: blockdev writev readv block ...passed 00:19:25.266 Test: blockdev writev readv size > 128k ...passed 00:19:25.266 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:25.266 Test: blockdev comparev and writev ...[2024-07-10 14:21:34.527569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.266 [2024-07-10 14:21:34.527648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.266 [2024-07-10 14:21:34.527687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.266 [2024-07-10 14:21:34.527717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.266 [2024-07-10 14:21:34.528184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.266 [2024-07-10 14:21:34.528218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.266 [2024-07-10 14:21:34.528251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.266 [2024-07-10 14:21:34.528275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.266 [2024-07-10 14:21:34.528738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.266 [2024-07-10 14:21:34.528771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.266 [2024-07-10 14:21:34.528804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.266 [2024-07-10 14:21:34.528828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.266 [2024-07-10 14:21:34.529277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.266 [2024-07-10 14:21:34.529309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.266 [2024-07-10 14:21:34.529341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.266 [2024-07-10 14:21:34.529365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.266 passed 00:19:25.266 Test: blockdev nvme passthru rw ...passed 00:19:25.266 Test: blockdev nvme passthru vendor specific ...[2024-07-10 14:21:34.611979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.266 [2024-07-10 14:21:34.612037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.266 [2024-07-10 14:21:34.612303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.266 [2024-07-10 14:21:34.612335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.266 [2024-07-10 14:21:34.612586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.266 [2024-07-10 14:21:34.612618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.266 [2024-07-10 14:21:34.612856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.266 [2024-07-10 14:21:34.612895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.266 passed 00:19:25.266 Test: blockdev nvme admin passthru ...passed 00:19:25.266 Test: blockdev copy ...passed 00:19:25.266 00:19:25.266 Run Summary: Type Total Ran Passed Failed Inactive 00:19:25.266 suites 1 1 n/a 0 0 00:19:25.266 tests 23 23 23 0 0 00:19:25.266 asserts 152 152 152 0 n/a 00:19:25.266 00:19:25.266 Elapsed time = 1.293 seconds 00:19:26.200 14:21:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:26.200 14:21:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.200 14:21:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:26.200 14:21:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.200 14:21:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:26.200 14:21:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:26.200 14:21:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:26.200 14:21:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:26.200 14:21:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:26.200 14:21:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:26.200 14:21:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:26.200 14:21:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:26.200 rmmod nvme_tcp 00:19:26.200 rmmod nvme_fabrics 00:19:26.458 rmmod nvme_keyring 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1387943 ']' 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1387943 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1387943 ']' 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1387943 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1387943 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1387943' 00:19:26.458 killing process with pid 1387943 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1387943 00:19:26.458 14:21:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1387943 00:19:27.830 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:27.830 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:27.830 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:27.830 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:27.830 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:27.830 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.830 14:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.830 14:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.725 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:29.725 00:19:29.725 real 0m9.437s 00:19:29.725 user 0m22.678s 00:19:29.725 sys 0m2.402s 00:19:29.725 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:29.725 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.725 ************************************ 00:19:29.725 END TEST nvmf_bdevio 00:19:29.725 ************************************ 00:19:29.725 14:21:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:29.725 14:21:39 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:29.725 14:21:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:29.725 14:21:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:29.725 14:21:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:29.725 ************************************ 00:19:29.725 START TEST nvmf_auth_target 00:19:29.725 ************************************ 00:19:29.725 14:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:29.982 * Looking for test storage... 
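Both the nvmf_bdevio block that just ended and the nvmf_auth_target block starting here are driven by the same run_test wrapper in common/autotest_common.sh, which is where the START TEST/END TEST banners and the real/user/sys timing lines come from: it prints the start banner, times the named script, and closes with the end banner while propagating the script's status. A stripped-down sketch of that shape, with the argument checks and xtrace bookkeeping of the real wrapper left out:

  run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"                      # e.g. .../test/nvmf/target/auth.sh --transport=tcp
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }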
00:19:29.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:29.982 14:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.931 14:21:41 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:31.931 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:31.931 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:31.931 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:31.931 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.931 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:31.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:19:31.932 00:19:31.932 --- 10.0.0.2 ping statistics --- 00:19:31.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.932 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:31.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:19:31.932 00:19:31.932 --- 10.0.0.1 ping statistics --- 00:19:31.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.932 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1390432 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1390432 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1390432 ']' 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
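nvmfappstart backgrounds the target (here with -L nvmf_auth so the DHCHAP negotiation will be logged), records nvmfpid, and then sits in waitforlisten until the app answers RPCs on /var/tmp/spdk.sock before any rpc_cmd is issued. The body of that wait loop is not shown in this trace; a minimal equivalent, assuming scripts/rpc.py and its rpc_get_methods call, would be something like:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1            # app died during startup
          if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
              return 0                                       # RPC server is answering
          fi
          sleep 0.5
      done
      return 1                                               # timed out
  }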
00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:31.932 14:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1390583 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b04da4a349b0dd82fcf2678cc9ccb11f9df3f46c8621fb6e 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Hgo 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b04da4a349b0dd82fcf2678cc9ccb11f9df3f46c8621fb6e 0 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b04da4a349b0dd82fcf2678cc9ccb11f9df3f46c8621fb6e 0 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b04da4a349b0dd82fcf2678cc9ccb11f9df3f46c8621fb6e 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Hgo 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Hgo 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Hgo 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=26e9f6a359f491a6c9630ffc9995c858ec4072f9cad6506806438a200c1de572 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1pb 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 26e9f6a359f491a6c9630ffc9995c858ec4072f9cad6506806438a200c1de572 3 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 26e9f6a359f491a6c9630ffc9995c858ec4072f9cad6506806438a200c1de572 3 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.306 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=26e9f6a359f491a6c9630ffc9995c858ec4072f9cad6506806438a200c1de572 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1pb 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1pb 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.1pb 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=23ba042b3735a22bb9b638daabb548d0 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hWz 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 23ba042b3735a22bb9b638daabb548d0 1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 23ba042b3735a22bb9b638daabb548d0 1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=23ba042b3735a22bb9b638daabb548d0 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hWz 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hWz 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.hWz 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1e44624b3628ab11b670a5a71ee3c1f8520da35752e195e3 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JON 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1e44624b3628ab11b670a5a71ee3c1f8520da35752e195e3 2 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1e44624b3628ab11b670a5a71ee3c1f8520da35752e195e3 2 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1e44624b3628ab11b670a5a71ee3c1f8520da35752e195e3 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JON 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JON 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.JON 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ed37a4f47b6eed16386c22347e923d4a3f514133162c00a5 00:19:33.307 
14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HRx 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ed37a4f47b6eed16386c22347e923d4a3f514133162c00a5 2 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ed37a4f47b6eed16386c22347e923d4a3f514133162c00a5 2 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ed37a4f47b6eed16386c22347e923d4a3f514133162c00a5 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HRx 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HRx 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.HRx 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fd06165f53f4f320b22caccbd78aa607 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SH1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fd06165f53f4f320b22caccbd78aa607 1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fd06165f53f4f320b22caccbd78aa607 1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fd06165f53f4f320b22caccbd78aa607 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SH1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SH1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.SH1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=514d11043e3385e0a246abfe572262e6e37189122d2aa1c55a26908c53ec6cb9 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XPh 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 514d11043e3385e0a246abfe572262e6e37189122d2aa1c55a26908c53ec6cb9 3 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 514d11043e3385e0a246abfe572262e6e37189122d2aa1c55a26908c53ec6cb9 3 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:33.307 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=514d11043e3385e0a246abfe572262e6e37189122d2aa1c55a26908c53ec6cb9 00:19:33.308 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:33.308 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.308 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XPh 00:19:33.308 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XPh 00:19:33.308 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.XPh 00:19:33.308 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:33.308 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1390432 00:19:33.308 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1390432 ']' 00:19:33.308 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.308 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.308 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
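[editor's note, not part of the captured trace] The gen_dhchap_key calls traced above appear to draw len/2 random bytes with xxd, wrap the resulting hex string as a DHHC-1 secret for the requested digest (null=00, sha256=01, sha384=02, sha512=03), and store it mode 0600 in a mktemp'd file. The sketch below reproduces that flow under one assumption: that the secret body is base64(ASCII key + little-endian CRC-32), which is inferred from the DHHC-1:xx:...: secrets printed later in this log rather than taken verbatim from nvmf/common.sh; the function and helper names here are illustrative only.

    #!/usr/bin/env bash
    # Minimal sketch of the key-generation flow seen in this trace.
    # Assumption: DHHC-1 framing is base64(ASCII hex key + CRC-32 LE),
    # inferred from the secrets visible in the log, not SPDK source.
    gen_dhchap_key_sketch() {
        local digest=$1 len=$2            # e.g. "sha256" 32 -> 32 hex chars
        declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

        # len/2 random bytes rendered as a lowercase hex string (as in the trace).
        local key
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

        # Wrap the ASCII key as a DHHC-1 secret and stash it in a temp key file.
        local file
        file=$(mktemp -t "spdk.key-${digest}.XXX")
        python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
    import base64, sys, zlib
    key, digest = sys.argv[1].encode(), int(sys.argv[2])
    blob = key + zlib.crc32(key).to_bytes(4, "little")   # assumed framing
    print(f"DHHC-1:{digest:02x}:{base64.b64encode(blob).decode()}:")
    PY
        chmod 0600 "$file"
        echo "$file"
    }

    # Usage mirroring the trace: a 48-char null-digest key and a 64-char sha512 ctrlr key.
    keyfile=$(gen_dhchap_key_sketch null 48)
    ckeyfile=$(gen_dhchap_key_sketch sha512 64)
    echo "key:  $keyfile"
    echo "ckey: $ckeyfile"

These files are what the subsequent keyring_file_add_key RPCs and the nvme connect --dhchap-secret/--dhchap-ctrl-secret options in the rest of the trace consume.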
00:19:33.308 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.308 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.565 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.565 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:33.565 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1390583 /var/tmp/host.sock 00:19:33.565 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1390583 ']' 00:19:33.565 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:33.565 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.565 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:33.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:33.565 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.565 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.499 14:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.499 14:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:34.499 14:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:34.499 14:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.499 14:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.499 14:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.499 14:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:34.499 14:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Hgo 00:19:34.499 14:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.499 14:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.499 14:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.499 14:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Hgo 00:19:34.499 14:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Hgo 00:19:34.757 14:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.1pb ]] 00:19:34.757 14:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1pb 00:19:34.757 14:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.757 14:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.757 14:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.757 14:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1pb 00:19:34.757 14:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1pb 00:19:35.015 14:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:35.015 14:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.hWz 00:19:35.015 14:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.015 14:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.015 14:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.015 14:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.hWz 00:19:35.015 14:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.hWz 00:19:35.281 14:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.JON ]] 00:19:35.282 14:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JON 00:19:35.282 14:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.282 14:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.282 14:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.282 14:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JON 00:19:35.282 14:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JON 00:19:35.282 14:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:35.282 14:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.HRx 00:19:35.282 14:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.282 14:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.546 14:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.546 14:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.HRx 00:19:35.546 14:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.HRx 00:19:35.546 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.SH1 ]] 00:19:35.546 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SH1 00:19:35.546 14:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.546 14:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.546 14:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.546 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SH1 00:19:35.546 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.SH1 00:19:35.804 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:35.804 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.XPh 00:19:35.804 14:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.804 14:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.804 14:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.804 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.XPh 00:19:35.804 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.XPh 00:19:36.061 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:36.061 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:36.061 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.061 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.061 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:36.061 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:36.319 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:36.319 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.319 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:36.319 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:36.319 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:36.319 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.320 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.320 14:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.320 14:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.320 14:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.320 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.320 14:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.885 00:19:36.885 14:21:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.885 14:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.885 14:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.885 14:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.885 14:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.885 14:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.885 14:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.885 14:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.885 14:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.885 { 00:19:36.885 "cntlid": 1, 00:19:36.885 "qid": 0, 00:19:36.885 "state": "enabled", 00:19:36.885 "thread": "nvmf_tgt_poll_group_000", 00:19:36.885 "listen_address": { 00:19:36.885 "trtype": "TCP", 00:19:36.885 "adrfam": "IPv4", 00:19:36.885 "traddr": "10.0.0.2", 00:19:36.885 "trsvcid": "4420" 00:19:36.885 }, 00:19:36.885 "peer_address": { 00:19:36.885 "trtype": "TCP", 00:19:36.885 "adrfam": "IPv4", 00:19:36.885 "traddr": "10.0.0.1", 00:19:36.885 "trsvcid": "49712" 00:19:36.885 }, 00:19:36.885 "auth": { 00:19:36.885 "state": "completed", 00:19:36.885 "digest": "sha256", 00:19:36.885 "dhgroup": "null" 00:19:36.885 } 00:19:36.885 } 00:19:36.885 ]' 00:19:36.885 14:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.143 14:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.143 14:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.143 14:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:37.143 14:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.143 14:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.143 14:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.143 14:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.401 14:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:19:38.334 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.334 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.334 14:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.334 14:21:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.334 14:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.334 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.334 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:38.334 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:38.591 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:38.591 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.591 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.591 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:38.591 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:38.591 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.591 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.591 14:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.591 14:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.591 14:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.591 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.591 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.848 00:19:38.848 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.848 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.849 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.106 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.106 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.106 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.106 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.106 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.106 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.106 { 00:19:39.106 "cntlid": 3, 00:19:39.106 "qid": 0, 00:19:39.106 
"state": "enabled", 00:19:39.106 "thread": "nvmf_tgt_poll_group_000", 00:19:39.106 "listen_address": { 00:19:39.106 "trtype": "TCP", 00:19:39.106 "adrfam": "IPv4", 00:19:39.106 "traddr": "10.0.0.2", 00:19:39.106 "trsvcid": "4420" 00:19:39.106 }, 00:19:39.106 "peer_address": { 00:19:39.106 "trtype": "TCP", 00:19:39.106 "adrfam": "IPv4", 00:19:39.106 "traddr": "10.0.0.1", 00:19:39.106 "trsvcid": "49736" 00:19:39.106 }, 00:19:39.106 "auth": { 00:19:39.106 "state": "completed", 00:19:39.106 "digest": "sha256", 00:19:39.106 "dhgroup": "null" 00:19:39.106 } 00:19:39.106 } 00:19:39.106 ]' 00:19:39.106 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.106 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.107 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.107 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:39.107 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.364 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.364 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.364 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.622 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:19:40.555 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.555 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.555 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.555 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.555 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.555 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.555 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:40.555 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:40.813 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:40.813 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.813 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.813 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:40.813 14:21:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:40.813 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.813 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.813 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.813 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.813 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.813 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.813 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.071 00:19:41.071 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.071 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.071 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.330 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.330 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.330 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.330 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.330 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.330 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.330 { 00:19:41.330 "cntlid": 5, 00:19:41.330 "qid": 0, 00:19:41.330 "state": "enabled", 00:19:41.330 "thread": "nvmf_tgt_poll_group_000", 00:19:41.330 "listen_address": { 00:19:41.330 "trtype": "TCP", 00:19:41.330 "adrfam": "IPv4", 00:19:41.330 "traddr": "10.0.0.2", 00:19:41.330 "trsvcid": "4420" 00:19:41.330 }, 00:19:41.330 "peer_address": { 00:19:41.330 "trtype": "TCP", 00:19:41.330 "adrfam": "IPv4", 00:19:41.330 "traddr": "10.0.0.1", 00:19:41.330 "trsvcid": "49768" 00:19:41.330 }, 00:19:41.330 "auth": { 00:19:41.330 "state": "completed", 00:19:41.330 "digest": "sha256", 00:19:41.330 "dhgroup": "null" 00:19:41.330 } 00:19:41.330 } 00:19:41.330 ]' 00:19:41.330 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.330 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.330 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.330 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:41.330 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:19:41.330 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.330 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.330 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.588 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.961 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.219 00:19:43.219 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.219 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.219 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.477 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.477 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.477 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.477 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.477 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.477 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.477 { 00:19:43.477 "cntlid": 7, 00:19:43.477 "qid": 0, 00:19:43.477 "state": "enabled", 00:19:43.477 "thread": "nvmf_tgt_poll_group_000", 00:19:43.477 "listen_address": { 00:19:43.478 "trtype": "TCP", 00:19:43.478 "adrfam": "IPv4", 00:19:43.478 "traddr": "10.0.0.2", 00:19:43.478 "trsvcid": "4420" 00:19:43.478 }, 00:19:43.478 "peer_address": { 00:19:43.478 "trtype": "TCP", 00:19:43.478 "adrfam": "IPv4", 00:19:43.478 "traddr": "10.0.0.1", 00:19:43.478 "trsvcid": "49796" 00:19:43.478 }, 00:19:43.478 "auth": { 00:19:43.478 "state": "completed", 00:19:43.478 "digest": "sha256", 00:19:43.478 "dhgroup": "null" 00:19:43.478 } 00:19:43.478 } 00:19:43.478 ]' 00:19:43.478 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.478 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.478 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.734 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:43.734 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.734 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.734 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.734 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.990 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:19:44.919 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.919 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.919 14:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.919 14:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.919 14:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.919 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.919 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.919 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:44.919 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:45.176 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:45.176 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.176 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:45.176 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:45.176 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:45.176 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.176 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.176 14:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.176 14:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.176 14:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.176 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.176 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.432 00:19:45.432 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.432 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.432 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.690 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.690 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.690 14:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:45.690 14:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.690 14:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.690 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.690 { 00:19:45.690 "cntlid": 9, 00:19:45.690 "qid": 0, 00:19:45.690 "state": "enabled", 00:19:45.690 "thread": "nvmf_tgt_poll_group_000", 00:19:45.690 "listen_address": { 00:19:45.690 "trtype": "TCP", 00:19:45.690 "adrfam": "IPv4", 00:19:45.690 "traddr": "10.0.0.2", 00:19:45.690 "trsvcid": "4420" 00:19:45.690 }, 00:19:45.690 "peer_address": { 00:19:45.690 "trtype": "TCP", 00:19:45.690 "adrfam": "IPv4", 00:19:45.690 "traddr": "10.0.0.1", 00:19:45.690 "trsvcid": "38176" 00:19:45.690 }, 00:19:45.690 "auth": { 00:19:45.690 "state": "completed", 00:19:45.690 "digest": "sha256", 00:19:45.690 "dhgroup": "ffdhe2048" 00:19:45.690 } 00:19:45.690 } 00:19:45.690 ]' 00:19:45.690 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.690 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.690 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.690 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:45.690 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.947 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.947 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.947 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.205 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.137 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.702 00:19:47.702 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.702 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.702 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.959 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.959 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.959 14:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.959 14:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.959 14:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.959 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.959 { 00:19:47.959 "cntlid": 11, 00:19:47.959 "qid": 0, 00:19:47.959 "state": "enabled", 00:19:47.959 "thread": "nvmf_tgt_poll_group_000", 00:19:47.959 "listen_address": { 00:19:47.959 "trtype": "TCP", 00:19:47.959 "adrfam": "IPv4", 00:19:47.959 "traddr": "10.0.0.2", 00:19:47.959 "trsvcid": "4420" 00:19:47.959 }, 00:19:47.959 "peer_address": { 00:19:47.959 "trtype": "TCP", 00:19:47.959 "adrfam": "IPv4", 00:19:47.959 "traddr": "10.0.0.1", 00:19:47.959 "trsvcid": "38200" 00:19:47.959 }, 00:19:47.959 "auth": { 00:19:47.959 "state": "completed", 00:19:47.959 "digest": "sha256", 00:19:47.959 "dhgroup": "ffdhe2048" 00:19:47.959 } 00:19:47.959 } 00:19:47.959 ]' 00:19:47.959 
14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.959 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.959 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.959 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:47.959 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.959 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.959 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.959 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.216 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:19:49.149 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.149 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.149 14:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.149 14:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.149 14:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.149 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.149 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:49.149 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:49.406 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:49.406 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.406 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.406 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:49.406 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:49.406 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.406 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.406 14:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.406 14:21:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:49.406 14:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.406 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.406 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.664 00:19:49.664 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.664 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.664 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.921 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.921 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.921 14:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.921 14:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.921 14:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.921 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.921 { 00:19:49.921 "cntlid": 13, 00:19:49.921 "qid": 0, 00:19:49.921 "state": "enabled", 00:19:49.921 "thread": "nvmf_tgt_poll_group_000", 00:19:49.921 "listen_address": { 00:19:49.921 "trtype": "TCP", 00:19:49.921 "adrfam": "IPv4", 00:19:49.921 "traddr": "10.0.0.2", 00:19:49.921 "trsvcid": "4420" 00:19:49.921 }, 00:19:49.921 "peer_address": { 00:19:49.921 "trtype": "TCP", 00:19:49.921 "adrfam": "IPv4", 00:19:49.921 "traddr": "10.0.0.1", 00:19:49.921 "trsvcid": "38228" 00:19:49.921 }, 00:19:49.921 "auth": { 00:19:49.921 "state": "completed", 00:19:49.921 "digest": "sha256", 00:19:49.921 "dhgroup": "ffdhe2048" 00:19:49.921 } 00:19:49.921 } 00:19:49.921 ]' 00:19:49.921 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.179 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.179 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.179 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:50.179 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.179 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.179 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.179 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.436 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:19:51.368 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.368 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.368 14:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.368 14:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.368 14:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.368 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.368 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:51.368 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:51.625 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:51.625 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.625 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:51.625 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:51.625 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:51.625 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.625 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:51.625 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.625 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.625 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.625 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.625 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.189 00:19:52.189 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.189 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.189 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.189 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.189 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.189 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.189 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.189 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.189 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.189 { 00:19:52.189 "cntlid": 15, 00:19:52.189 "qid": 0, 00:19:52.189 "state": "enabled", 00:19:52.189 "thread": "nvmf_tgt_poll_group_000", 00:19:52.189 "listen_address": { 00:19:52.189 "trtype": "TCP", 00:19:52.189 "adrfam": "IPv4", 00:19:52.189 "traddr": "10.0.0.2", 00:19:52.189 "trsvcid": "4420" 00:19:52.189 }, 00:19:52.189 "peer_address": { 00:19:52.189 "trtype": "TCP", 00:19:52.189 "adrfam": "IPv4", 00:19:52.189 "traddr": "10.0.0.1", 00:19:52.189 "trsvcid": "38248" 00:19:52.189 }, 00:19:52.189 "auth": { 00:19:52.189 "state": "completed", 00:19:52.189 "digest": "sha256", 00:19:52.189 "dhgroup": "ffdhe2048" 00:19:52.189 } 00:19:52.189 } 00:19:52.189 ]' 00:19:52.189 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.446 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.446 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.446 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.446 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.446 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.446 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.446 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.703 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:19:53.655 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.655 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.655 14:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.655 14:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.655 14:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.655 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.655 14:22:02 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.655 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.655 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.917 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:53.917 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.917 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.917 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:53.917 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:53.917 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.917 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.917 14:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.917 14:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.917 14:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.917 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.917 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.483 00:19:54.483 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.483 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.483 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.483 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.483 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.483 14:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.483 14:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.483 14:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.483 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.483 { 00:19:54.483 "cntlid": 17, 00:19:54.483 "qid": 0, 00:19:54.483 "state": "enabled", 00:19:54.483 "thread": "nvmf_tgt_poll_group_000", 00:19:54.483 "listen_address": { 00:19:54.483 "trtype": "TCP", 00:19:54.483 "adrfam": "IPv4", 00:19:54.483 "traddr": 
"10.0.0.2", 00:19:54.483 "trsvcid": "4420" 00:19:54.483 }, 00:19:54.483 "peer_address": { 00:19:54.483 "trtype": "TCP", 00:19:54.483 "adrfam": "IPv4", 00:19:54.483 "traddr": "10.0.0.1", 00:19:54.483 "trsvcid": "46512" 00:19:54.483 }, 00:19:54.483 "auth": { 00:19:54.483 "state": "completed", 00:19:54.483 "digest": "sha256", 00:19:54.483 "dhgroup": "ffdhe3072" 00:19:54.483 } 00:19:54.483 } 00:19:54.483 ]' 00:19:54.483 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.741 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.741 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.741 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:54.741 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.741 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.741 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.741 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.999 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:19:55.933 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.933 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.933 14:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.933 14:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.933 14:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.933 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.933 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:55.933 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:56.191 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:56.191 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.191 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:56.191 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:56.191 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:56.191 14:22:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.191 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.191 14:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.191 14:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.191 14:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.191 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.191 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.449 00:19:56.449 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.449 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.449 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.707 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.707 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.707 14:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.707 14:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.707 14:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.707 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.707 { 00:19:56.707 "cntlid": 19, 00:19:56.707 "qid": 0, 00:19:56.707 "state": "enabled", 00:19:56.707 "thread": "nvmf_tgt_poll_group_000", 00:19:56.707 "listen_address": { 00:19:56.707 "trtype": "TCP", 00:19:56.707 "adrfam": "IPv4", 00:19:56.707 "traddr": "10.0.0.2", 00:19:56.707 "trsvcid": "4420" 00:19:56.707 }, 00:19:56.707 "peer_address": { 00:19:56.707 "trtype": "TCP", 00:19:56.707 "adrfam": "IPv4", 00:19:56.707 "traddr": "10.0.0.1", 00:19:56.707 "trsvcid": "46534" 00:19:56.707 }, 00:19:56.707 "auth": { 00:19:56.707 "state": "completed", 00:19:56.707 "digest": "sha256", 00:19:56.707 "dhgroup": "ffdhe3072" 00:19:56.707 } 00:19:56.707 } 00:19:56.707 ]' 00:19:56.707 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.965 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.965 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.965 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:56.965 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.965 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.965 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.965 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.223 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:19:58.256 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.256 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.256 14:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.256 14:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.256 14:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.256 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.256 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:58.256 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:58.514 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:58.514 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.514 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:58.514 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:58.514 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:58.514 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.514 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.514 14:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.514 14:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.514 14:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.514 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.514 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.771 00:19:58.771 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.771 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.771 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.029 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.029 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.029 14:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.029 14:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.029 14:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.029 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.029 { 00:19:59.029 "cntlid": 21, 00:19:59.029 "qid": 0, 00:19:59.029 "state": "enabled", 00:19:59.029 "thread": "nvmf_tgt_poll_group_000", 00:19:59.029 "listen_address": { 00:19:59.029 "trtype": "TCP", 00:19:59.029 "adrfam": "IPv4", 00:19:59.029 "traddr": "10.0.0.2", 00:19:59.029 "trsvcid": "4420" 00:19:59.029 }, 00:19:59.029 "peer_address": { 00:19:59.029 "trtype": "TCP", 00:19:59.029 "adrfam": "IPv4", 00:19:59.029 "traddr": "10.0.0.1", 00:19:59.029 "trsvcid": "46554" 00:19:59.030 }, 00:19:59.030 "auth": { 00:19:59.030 "state": "completed", 00:19:59.030 "digest": "sha256", 00:19:59.030 "dhgroup": "ffdhe3072" 00:19:59.030 } 00:19:59.030 } 00:19:59.030 ]' 00:19:59.030 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.288 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.288 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.288 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:59.288 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.288 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.288 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.288 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.545 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:20:00.479 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
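
Each pass of the loop traced here follows the same shape. A minimal sketch of one iteration (ffdhe3072 with key2/ckey2), condensed from the commands above; the variable names are introduced only for readability, the key names key0..key3/ckey0..ckey2 are assumed to have been registered earlier in the run, and the DHHC-1 secrets are abbreviated as placeholders:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  hostid=5b23e107-7094-e311-b1cb-001e67a97d55

  # host application: restrict negotiation to the digest/DH group under test
  "$rpc_py" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  # target: authorize the host NQN with this iteration's key pair
  "$rpc_py" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host application: attach, run the qpairs/jq checks shown earlier, then detach
  "$rpc_py" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  "$rpc_py" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # kernel initiator: the same key material expressed as DHHC-1 secrets; --dhchap-ctrl-secret is
  # present only when a controller key exists (the key3 case in this run connects without it)
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:02:<host secret>:' --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>:'
  nvme disconnect -n "$subnqn"                 # expect: "disconnected 1 controller(s)"
  # target: revoke the host before the next digest/DH-group/key combination
  "$rpc_py" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
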
00:20:00.479 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.479 14:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.479 14:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.479 14:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.479 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.479 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:00.479 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:00.737 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:00.737 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.737 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:00.737 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:00.737 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:00.737 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.737 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:00.737 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.737 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.737 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.737 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.737 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.995 00:20:01.252 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.252 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.252 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.252 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.252 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.252 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.252 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:01.509 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.509 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.509 { 00:20:01.509 "cntlid": 23, 00:20:01.509 "qid": 0, 00:20:01.509 "state": "enabled", 00:20:01.509 "thread": "nvmf_tgt_poll_group_000", 00:20:01.509 "listen_address": { 00:20:01.509 "trtype": "TCP", 00:20:01.509 "adrfam": "IPv4", 00:20:01.509 "traddr": "10.0.0.2", 00:20:01.509 "trsvcid": "4420" 00:20:01.509 }, 00:20:01.509 "peer_address": { 00:20:01.509 "trtype": "TCP", 00:20:01.509 "adrfam": "IPv4", 00:20:01.509 "traddr": "10.0.0.1", 00:20:01.509 "trsvcid": "46570" 00:20:01.509 }, 00:20:01.509 "auth": { 00:20:01.509 "state": "completed", 00:20:01.509 "digest": "sha256", 00:20:01.509 "dhgroup": "ffdhe3072" 00:20:01.509 } 00:20:01.509 } 00:20:01.509 ]' 00:20:01.509 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.509 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.509 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.509 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:01.509 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.509 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.510 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.510 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.767 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:20:02.701 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.701 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.701 14:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.701 14:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.701 14:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.701 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.701 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.701 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:02.701 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:02.959 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:20:02.959 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.959 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:02.959 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:02.959 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:02.959 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.959 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.959 14:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.959 14:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.959 14:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.959 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.959 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.217 00:20:03.475 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.475 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.475 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.475 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.475 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.475 14:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.475 14:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.733 14:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.733 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.733 { 00:20:03.733 "cntlid": 25, 00:20:03.733 "qid": 0, 00:20:03.733 "state": "enabled", 00:20:03.733 "thread": "nvmf_tgt_poll_group_000", 00:20:03.733 "listen_address": { 00:20:03.733 "trtype": "TCP", 00:20:03.733 "adrfam": "IPv4", 00:20:03.733 "traddr": "10.0.0.2", 00:20:03.733 "trsvcid": "4420" 00:20:03.733 }, 00:20:03.733 "peer_address": { 00:20:03.733 "trtype": "TCP", 00:20:03.733 "adrfam": "IPv4", 00:20:03.733 "traddr": "10.0.0.1", 00:20:03.733 "trsvcid": "46612" 00:20:03.733 }, 00:20:03.733 "auth": { 00:20:03.733 "state": "completed", 00:20:03.733 "digest": "sha256", 00:20:03.733 "dhgroup": "ffdhe4096" 00:20:03.733 } 00:20:03.733 } 00:20:03.733 ]' 00:20:03.733 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.733 14:22:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.733 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.733 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:03.733 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.733 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.733 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.733 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.990 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:20:04.923 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.923 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.923 14:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.923 14:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.923 14:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.923 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.923 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:04.923 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:05.181 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:05.181 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.181 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:05.181 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:05.181 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:05.181 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.181 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.181 14:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.181 14:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.181 14:22:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.181 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.181 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.746 00:20:05.746 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.746 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.746 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.004 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.004 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.004 14:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.004 14:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.004 14:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.004 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.004 { 00:20:06.004 "cntlid": 27, 00:20:06.004 "qid": 0, 00:20:06.004 "state": "enabled", 00:20:06.004 "thread": "nvmf_tgt_poll_group_000", 00:20:06.004 "listen_address": { 00:20:06.004 "trtype": "TCP", 00:20:06.004 "adrfam": "IPv4", 00:20:06.004 "traddr": "10.0.0.2", 00:20:06.004 "trsvcid": "4420" 00:20:06.004 }, 00:20:06.004 "peer_address": { 00:20:06.004 "trtype": "TCP", 00:20:06.004 "adrfam": "IPv4", 00:20:06.004 "traddr": "10.0.0.1", 00:20:06.004 "trsvcid": "37028" 00:20:06.004 }, 00:20:06.004 "auth": { 00:20:06.004 "state": "completed", 00:20:06.004 "digest": "sha256", 00:20:06.004 "dhgroup": "ffdhe4096" 00:20:06.004 } 00:20:06.004 } 00:20:06.004 ]' 00:20:06.004 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.004 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.004 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.004 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:06.004 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.004 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.004 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.004 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.262 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:20:07.194 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.194 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.194 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.194 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.194 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.194 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.194 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:07.194 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:07.452 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:07.452 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.452 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:07.452 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:07.452 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:07.452 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.452 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.452 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.452 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.452 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.452 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.452 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.017 00:20:08.017 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.017 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.017 14:22:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.275 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.275 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.275 14:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.275 14:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.275 14:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.275 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.275 { 00:20:08.275 "cntlid": 29, 00:20:08.275 "qid": 0, 00:20:08.275 "state": "enabled", 00:20:08.275 "thread": "nvmf_tgt_poll_group_000", 00:20:08.275 "listen_address": { 00:20:08.275 "trtype": "TCP", 00:20:08.275 "adrfam": "IPv4", 00:20:08.275 "traddr": "10.0.0.2", 00:20:08.275 "trsvcid": "4420" 00:20:08.275 }, 00:20:08.275 "peer_address": { 00:20:08.275 "trtype": "TCP", 00:20:08.275 "adrfam": "IPv4", 00:20:08.275 "traddr": "10.0.0.1", 00:20:08.275 "trsvcid": "37048" 00:20:08.275 }, 00:20:08.275 "auth": { 00:20:08.275 "state": "completed", 00:20:08.275 "digest": "sha256", 00:20:08.275 "dhgroup": "ffdhe4096" 00:20:08.275 } 00:20:08.275 } 00:20:08.275 ]' 00:20:08.275 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.275 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.275 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.275 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:08.275 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.275 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.275 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.275 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.533 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:20:09.468 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.468 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.468 14:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.468 14:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.468 14:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.468 14:22:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.468 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:09.468 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:09.726 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:09.726 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.726 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:09.726 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:09.726 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:09.726 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.726 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:09.726 14:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.726 14:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.726 14:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.726 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.726 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.291 00:20:10.291 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.291 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.291 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.549 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.549 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.549 14:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.549 14:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.549 14:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.549 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.549 { 00:20:10.549 "cntlid": 31, 00:20:10.549 "qid": 0, 00:20:10.549 "state": "enabled", 00:20:10.549 "thread": "nvmf_tgt_poll_group_000", 00:20:10.549 "listen_address": { 00:20:10.549 "trtype": "TCP", 00:20:10.549 "adrfam": "IPv4", 00:20:10.549 "traddr": "10.0.0.2", 00:20:10.549 "trsvcid": "4420" 00:20:10.549 }, 
00:20:10.549 "peer_address": { 00:20:10.549 "trtype": "TCP", 00:20:10.549 "adrfam": "IPv4", 00:20:10.549 "traddr": "10.0.0.1", 00:20:10.549 "trsvcid": "37070" 00:20:10.549 }, 00:20:10.549 "auth": { 00:20:10.549 "state": "completed", 00:20:10.549 "digest": "sha256", 00:20:10.549 "dhgroup": "ffdhe4096" 00:20:10.549 } 00:20:10.549 } 00:20:10.549 ]' 00:20:10.549 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.549 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.549 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.549 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.549 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.549 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.549 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.549 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.807 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:20:11.740 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.740 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.740 14:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.740 14:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.740 14:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.740 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.740 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.740 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:11.740 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:11.998 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:11.998 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.998 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:11.998 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:11.998 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:11.998 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:11.998 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.998 14:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.998 14:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.998 14:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.998 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.999 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.564 00:20:12.564 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.564 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.564 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.822 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.822 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.822 14:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.822 14:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.822 14:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.822 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.822 { 00:20:12.822 "cntlid": 33, 00:20:12.822 "qid": 0, 00:20:12.822 "state": "enabled", 00:20:12.822 "thread": "nvmf_tgt_poll_group_000", 00:20:12.822 "listen_address": { 00:20:12.822 "trtype": "TCP", 00:20:12.822 "adrfam": "IPv4", 00:20:12.822 "traddr": "10.0.0.2", 00:20:12.822 "trsvcid": "4420" 00:20:12.822 }, 00:20:12.822 "peer_address": { 00:20:12.822 "trtype": "TCP", 00:20:12.822 "adrfam": "IPv4", 00:20:12.822 "traddr": "10.0.0.1", 00:20:12.822 "trsvcid": "37082" 00:20:12.822 }, 00:20:12.822 "auth": { 00:20:12.822 "state": "completed", 00:20:12.822 "digest": "sha256", 00:20:12.822 "dhgroup": "ffdhe6144" 00:20:12.822 } 00:20:12.822 } 00:20:12.822 ]' 00:20:12.822 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.822 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.822 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.079 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:13.079 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.079 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.079 14:22:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.079 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.337 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:20:14.267 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.267 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.267 14:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.267 14:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.267 14:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.267 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.267 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:14.267 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:14.524 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:14.524 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.524 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:14.524 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:14.524 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:14.524 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.524 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.524 14:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.524 14:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.524 14:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.524 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.524 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.174 00:20:15.174 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.174 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.174 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.432 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.432 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.432 14:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.432 14:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.432 14:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.432 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.432 { 00:20:15.432 "cntlid": 35, 00:20:15.432 "qid": 0, 00:20:15.432 "state": "enabled", 00:20:15.432 "thread": "nvmf_tgt_poll_group_000", 00:20:15.432 "listen_address": { 00:20:15.432 "trtype": "TCP", 00:20:15.432 "adrfam": "IPv4", 00:20:15.432 "traddr": "10.0.0.2", 00:20:15.432 "trsvcid": "4420" 00:20:15.432 }, 00:20:15.432 "peer_address": { 00:20:15.432 "trtype": "TCP", 00:20:15.432 "adrfam": "IPv4", 00:20:15.432 "traddr": "10.0.0.1", 00:20:15.432 "trsvcid": "48256" 00:20:15.432 }, 00:20:15.432 "auth": { 00:20:15.432 "state": "completed", 00:20:15.432 "digest": "sha256", 00:20:15.432 "dhgroup": "ffdhe6144" 00:20:15.432 } 00:20:15.432 } 00:20:15.432 ]' 00:20:15.432 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.432 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.432 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.432 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:15.432 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.432 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.432 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.432 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.690 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:20:16.624 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.624 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.624 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.624 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.624 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.624 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.624 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.624 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.881 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:16.881 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.881 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:16.881 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:16.881 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:16.881 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.881 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.881 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.881 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.881 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.881 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.881 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.445 00:20:17.445 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.445 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.445 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.702 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.702 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.702 14:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.702 14:22:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:17.702 14:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.702 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.702 { 00:20:17.702 "cntlid": 37, 00:20:17.702 "qid": 0, 00:20:17.702 "state": "enabled", 00:20:17.702 "thread": "nvmf_tgt_poll_group_000", 00:20:17.702 "listen_address": { 00:20:17.702 "trtype": "TCP", 00:20:17.702 "adrfam": "IPv4", 00:20:17.702 "traddr": "10.0.0.2", 00:20:17.702 "trsvcid": "4420" 00:20:17.702 }, 00:20:17.702 "peer_address": { 00:20:17.702 "trtype": "TCP", 00:20:17.702 "adrfam": "IPv4", 00:20:17.702 "traddr": "10.0.0.1", 00:20:17.702 "trsvcid": "48266" 00:20:17.702 }, 00:20:17.702 "auth": { 00:20:17.702 "state": "completed", 00:20:17.702 "digest": "sha256", 00:20:17.702 "dhgroup": "ffdhe6144" 00:20:17.702 } 00:20:17.702 } 00:20:17.702 ]' 00:20:17.702 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.702 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.702 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.958 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:17.958 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.958 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.958 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.958 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.215 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:20:19.143 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.143 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.143 14:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.143 14:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.143 14:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.143 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.143 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.143 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.399 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:20:19.399 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.399 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:19.399 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:19.399 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:19.399 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.399 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:19.399 14:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.399 14:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.399 14:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.399 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.399 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.962 00:20:19.962 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.962 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.962 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.219 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.219 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.219 14:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.219 14:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.219 14:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.219 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.219 { 00:20:20.219 "cntlid": 39, 00:20:20.219 "qid": 0, 00:20:20.219 "state": "enabled", 00:20:20.219 "thread": "nvmf_tgt_poll_group_000", 00:20:20.219 "listen_address": { 00:20:20.219 "trtype": "TCP", 00:20:20.219 "adrfam": "IPv4", 00:20:20.219 "traddr": "10.0.0.2", 00:20:20.219 "trsvcid": "4420" 00:20:20.219 }, 00:20:20.219 "peer_address": { 00:20:20.219 "trtype": "TCP", 00:20:20.219 "adrfam": "IPv4", 00:20:20.219 "traddr": "10.0.0.1", 00:20:20.219 "trsvcid": "48294" 00:20:20.219 }, 00:20:20.219 "auth": { 00:20:20.219 "state": "completed", 00:20:20.219 "digest": "sha256", 00:20:20.219 "dhgroup": "ffdhe6144" 00:20:20.219 } 00:20:20.219 } 00:20:20.219 ]' 00:20:20.219 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.219 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.219 14:22:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.219 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:20.219 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.476 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.476 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.476 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.733 14:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:20:21.665 14:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.665 14:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.665 14:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.665 14:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.665 14:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.665 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.665 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.665 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.665 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.923 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:21.923 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.923 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:21.923 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:21.923 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:21.923 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.923 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.923 14:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.923 14:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.923 14:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.923 14:22:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.923 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.855 00:20:22.855 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.855 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.855 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.112 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.112 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.112 14:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.112 14:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.112 14:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.112 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.112 { 00:20:23.112 "cntlid": 41, 00:20:23.112 "qid": 0, 00:20:23.112 "state": "enabled", 00:20:23.112 "thread": "nvmf_tgt_poll_group_000", 00:20:23.112 "listen_address": { 00:20:23.112 "trtype": "TCP", 00:20:23.112 "adrfam": "IPv4", 00:20:23.112 "traddr": "10.0.0.2", 00:20:23.112 "trsvcid": "4420" 00:20:23.112 }, 00:20:23.112 "peer_address": { 00:20:23.112 "trtype": "TCP", 00:20:23.112 "adrfam": "IPv4", 00:20:23.112 "traddr": "10.0.0.1", 00:20:23.112 "trsvcid": "48314" 00:20:23.112 }, 00:20:23.112 "auth": { 00:20:23.112 "state": "completed", 00:20:23.112 "digest": "sha256", 00:20:23.112 "dhgroup": "ffdhe8192" 00:20:23.112 } 00:20:23.112 } 00:20:23.112 ]' 00:20:23.113 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.113 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.113 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.113 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:23.113 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.113 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.113 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.113 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.370 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:20:24.300 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.300 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.300 14:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.300 14:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.558 14:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.558 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.558 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.558 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.558 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:24.558 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.558 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:24.558 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:24.558 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:24.558 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.558 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.558 14:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.558 14:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.816 14:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.816 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.816 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.750 00:20:25.750 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.750 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.750 14:22:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.750 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.750 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.750 14:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.750 14:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.750 14:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.750 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.750 { 00:20:25.750 "cntlid": 43, 00:20:25.750 "qid": 0, 00:20:25.750 "state": "enabled", 00:20:25.750 "thread": "nvmf_tgt_poll_group_000", 00:20:25.750 "listen_address": { 00:20:25.750 "trtype": "TCP", 00:20:25.750 "adrfam": "IPv4", 00:20:25.750 "traddr": "10.0.0.2", 00:20:25.750 "trsvcid": "4420" 00:20:25.750 }, 00:20:25.750 "peer_address": { 00:20:25.750 "trtype": "TCP", 00:20:25.750 "adrfam": "IPv4", 00:20:25.750 "traddr": "10.0.0.1", 00:20:25.750 "trsvcid": "46970" 00:20:25.750 }, 00:20:25.750 "auth": { 00:20:25.750 "state": "completed", 00:20:25.750 "digest": "sha256", 00:20:25.750 "dhgroup": "ffdhe8192" 00:20:25.750 } 00:20:25.750 } 00:20:25.750 ]' 00:20:25.750 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.750 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.008 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.008 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.008 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.008 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.008 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.008 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.266 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:20:27.200 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.200 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.200 14:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.200 14:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.200 14:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.200 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.200 14:22:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.200 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.458 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:27.458 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.458 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:27.458 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:27.458 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:27.458 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.458 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.458 14:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.458 14:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.458 14:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.458 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.458 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.392 00:20:28.392 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.392 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.392 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.649 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.649 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.649 14:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.649 14:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.649 14:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.649 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.649 { 00:20:28.649 "cntlid": 45, 00:20:28.649 "qid": 0, 00:20:28.649 "state": "enabled", 00:20:28.649 "thread": "nvmf_tgt_poll_group_000", 00:20:28.649 "listen_address": { 00:20:28.649 "trtype": "TCP", 00:20:28.649 "adrfam": "IPv4", 00:20:28.649 "traddr": "10.0.0.2", 00:20:28.649 "trsvcid": "4420" 00:20:28.649 }, 00:20:28.649 
"peer_address": { 00:20:28.649 "trtype": "TCP", 00:20:28.649 "adrfam": "IPv4", 00:20:28.649 "traddr": "10.0.0.1", 00:20:28.649 "trsvcid": "46986" 00:20:28.649 }, 00:20:28.649 "auth": { 00:20:28.649 "state": "completed", 00:20:28.649 "digest": "sha256", 00:20:28.649 "dhgroup": "ffdhe8192" 00:20:28.649 } 00:20:28.649 } 00:20:28.649 ]' 00:20:28.649 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.649 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.649 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.649 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.649 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.649 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.649 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.649 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.907 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:20:29.839 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.839 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.839 14:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.839 14:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.839 14:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.839 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.839 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:29.839 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:30.097 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:30.097 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.097 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:30.097 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:30.097 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:30.097 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.097 14:22:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:30.097 14:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.097 14:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.355 14:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.355 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.355 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.289 00:20:31.289 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.289 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.289 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.289 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.289 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.289 14:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.289 14:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.289 14:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.289 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.289 { 00:20:31.289 "cntlid": 47, 00:20:31.289 "qid": 0, 00:20:31.289 "state": "enabled", 00:20:31.289 "thread": "nvmf_tgt_poll_group_000", 00:20:31.289 "listen_address": { 00:20:31.289 "trtype": "TCP", 00:20:31.289 "adrfam": "IPv4", 00:20:31.289 "traddr": "10.0.0.2", 00:20:31.289 "trsvcid": "4420" 00:20:31.289 }, 00:20:31.289 "peer_address": { 00:20:31.289 "trtype": "TCP", 00:20:31.289 "adrfam": "IPv4", 00:20:31.289 "traddr": "10.0.0.1", 00:20:31.289 "trsvcid": "47012" 00:20:31.289 }, 00:20:31.289 "auth": { 00:20:31.289 "state": "completed", 00:20:31.289 "digest": "sha256", 00:20:31.289 "dhgroup": "ffdhe8192" 00:20:31.289 } 00:20:31.289 } 00:20:31.289 ]' 00:20:31.289 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.289 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.289 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.547 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:31.547 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.547 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.547 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.547 14:22:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.804 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:20:32.737 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.737 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.737 14:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.737 14:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.737 14:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.737 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:32.737 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.737 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.737 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.737 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.995 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:32.995 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.995 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.995 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:32.995 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:32.995 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.995 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.995 14:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.995 14:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.995 14:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.995 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.995 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.253 00:20:33.253 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.253 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.253 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.511 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.511 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.511 14:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.511 14:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.511 14:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.511 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.511 { 00:20:33.511 "cntlid": 49, 00:20:33.511 "qid": 0, 00:20:33.511 "state": "enabled", 00:20:33.511 "thread": "nvmf_tgt_poll_group_000", 00:20:33.511 "listen_address": { 00:20:33.511 "trtype": "TCP", 00:20:33.511 "adrfam": "IPv4", 00:20:33.511 "traddr": "10.0.0.2", 00:20:33.511 "trsvcid": "4420" 00:20:33.511 }, 00:20:33.511 "peer_address": { 00:20:33.511 "trtype": "TCP", 00:20:33.511 "adrfam": "IPv4", 00:20:33.511 "traddr": "10.0.0.1", 00:20:33.511 "trsvcid": "47038" 00:20:33.511 }, 00:20:33.511 "auth": { 00:20:33.511 "state": "completed", 00:20:33.511 "digest": "sha384", 00:20:33.511 "dhgroup": "null" 00:20:33.511 } 00:20:33.511 } 00:20:33.511 ]' 00:20:33.511 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.511 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.511 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.769 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:33.769 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.769 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.769 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.769 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.027 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:20:34.960 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.960 14:22:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.960 14:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.960 14:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.960 14:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.960 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.960 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:34.960 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:35.218 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:35.218 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.218 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:35.218 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:35.218 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:35.218 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.218 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.218 14:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.218 14:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.218 14:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.218 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.218 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.476 00:20:35.476 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.476 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.476 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.734 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.734 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.734 14:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.734 14:22:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.734 14:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.734 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.734 { 00:20:35.734 "cntlid": 51, 00:20:35.734 "qid": 0, 00:20:35.734 "state": "enabled", 00:20:35.734 "thread": "nvmf_tgt_poll_group_000", 00:20:35.734 "listen_address": { 00:20:35.734 "trtype": "TCP", 00:20:35.734 "adrfam": "IPv4", 00:20:35.734 "traddr": "10.0.0.2", 00:20:35.734 "trsvcid": "4420" 00:20:35.734 }, 00:20:35.734 "peer_address": { 00:20:35.734 "trtype": "TCP", 00:20:35.734 "adrfam": "IPv4", 00:20:35.734 "traddr": "10.0.0.1", 00:20:35.734 "trsvcid": "58838" 00:20:35.734 }, 00:20:35.734 "auth": { 00:20:35.734 "state": "completed", 00:20:35.734 "digest": "sha384", 00:20:35.734 "dhgroup": "null" 00:20:35.734 } 00:20:35.734 } 00:20:35.734 ]' 00:20:35.734 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.734 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.734 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.734 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:35.734 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.734 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.734 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.734 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.992 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:20:36.925 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.183 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.183 14:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.183 14:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.183 14:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.183 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.183 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:37.183 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:37.441 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:37.441 14:22:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.441 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:37.441 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:37.441 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:37.441 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.441 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.441 14:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.441 14:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.441 14:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.441 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.441 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.698 00:20:37.698 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.698 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.698 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.956 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.956 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.956 14:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.956 14:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.956 14:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.956 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.956 { 00:20:37.956 "cntlid": 53, 00:20:37.956 "qid": 0, 00:20:37.956 "state": "enabled", 00:20:37.956 "thread": "nvmf_tgt_poll_group_000", 00:20:37.956 "listen_address": { 00:20:37.956 "trtype": "TCP", 00:20:37.956 "adrfam": "IPv4", 00:20:37.956 "traddr": "10.0.0.2", 00:20:37.956 "trsvcid": "4420" 00:20:37.956 }, 00:20:37.956 "peer_address": { 00:20:37.956 "trtype": "TCP", 00:20:37.956 "adrfam": "IPv4", 00:20:37.956 "traddr": "10.0.0.1", 00:20:37.956 "trsvcid": "58874" 00:20:37.956 }, 00:20:37.956 "auth": { 00:20:37.956 "state": "completed", 00:20:37.956 "digest": "sha384", 00:20:37.956 "dhgroup": "null" 00:20:37.956 } 00:20:37.956 } 00:20:37.956 ]' 00:20:37.956 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.956 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:20:37.956 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.956 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:37.956 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.956 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.957 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.957 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.215 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:20:39.148 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.148 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.148 14:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.148 14:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.148 14:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.148 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.148 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:39.148 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:39.406 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:39.406 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.406 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:39.406 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:39.406 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:39.406 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.406 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:39.406 14:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.406 14:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.406 14:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.406 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.406 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.664 00:20:39.664 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.664 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.664 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.922 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.922 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.922 14:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.922 14:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.922 14:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.922 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.922 { 00:20:39.922 "cntlid": 55, 00:20:39.922 "qid": 0, 00:20:39.922 "state": "enabled", 00:20:39.922 "thread": "nvmf_tgt_poll_group_000", 00:20:39.922 "listen_address": { 00:20:39.922 "trtype": "TCP", 00:20:39.922 "adrfam": "IPv4", 00:20:39.922 "traddr": "10.0.0.2", 00:20:39.922 "trsvcid": "4420" 00:20:39.922 }, 00:20:39.922 "peer_address": { 00:20:39.922 "trtype": "TCP", 00:20:39.922 "adrfam": "IPv4", 00:20:39.922 "traddr": "10.0.0.1", 00:20:39.922 "trsvcid": "58908" 00:20:39.922 }, 00:20:39.922 "auth": { 00:20:39.922 "state": "completed", 00:20:39.922 "digest": "sha384", 00:20:39.922 "dhgroup": "null" 00:20:39.922 } 00:20:39.922 } 00:20:39.922 ]' 00:20:39.922 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.180 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.180 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.180 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:40.180 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.180 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.180 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.180 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.438 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:20:41.371 14:22:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.371 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.371 14:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.371 14:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.372 14:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.372 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.372 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.372 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:41.372 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:41.629 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:41.629 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.629 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.629 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:41.629 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:41.629 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.629 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.629 14:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.629 14:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.629 14:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.629 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.629 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.887 00:20:41.887 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.887 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.887 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.144 14:22:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.144 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.144 14:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.144 14:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.144 14:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.144 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.144 { 00:20:42.144 "cntlid": 57, 00:20:42.144 "qid": 0, 00:20:42.144 "state": "enabled", 00:20:42.144 "thread": "nvmf_tgt_poll_group_000", 00:20:42.144 "listen_address": { 00:20:42.144 "trtype": "TCP", 00:20:42.144 "adrfam": "IPv4", 00:20:42.144 "traddr": "10.0.0.2", 00:20:42.144 "trsvcid": "4420" 00:20:42.144 }, 00:20:42.144 "peer_address": { 00:20:42.144 "trtype": "TCP", 00:20:42.144 "adrfam": "IPv4", 00:20:42.144 "traddr": "10.0.0.1", 00:20:42.144 "trsvcid": "58930" 00:20:42.144 }, 00:20:42.144 "auth": { 00:20:42.144 "state": "completed", 00:20:42.144 "digest": "sha384", 00:20:42.144 "dhgroup": "ffdhe2048" 00:20:42.144 } 00:20:42.144 } 00:20:42.144 ]' 00:20:42.144 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.144 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.144 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.401 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:42.401 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.401 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.401 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.401 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.659 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:20:43.592 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.592 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.592 14:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.592 14:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.592 14:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.592 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.592 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:43.592 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:43.850 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:43.850 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.850 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:43.850 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:43.850 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:43.850 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.850 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.850 14:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.850 14:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.850 14:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.850 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.850 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.107 00:20:44.107 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.107 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.107 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.364 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.364 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.364 14:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.364 14:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.364 14:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.364 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.364 { 00:20:44.364 "cntlid": 59, 00:20:44.364 "qid": 0, 00:20:44.364 "state": "enabled", 00:20:44.364 "thread": "nvmf_tgt_poll_group_000", 00:20:44.364 "listen_address": { 00:20:44.364 "trtype": "TCP", 00:20:44.364 "adrfam": "IPv4", 00:20:44.364 "traddr": "10.0.0.2", 00:20:44.364 "trsvcid": "4420" 00:20:44.364 }, 00:20:44.364 "peer_address": { 00:20:44.364 "trtype": "TCP", 00:20:44.364 "adrfam": "IPv4", 00:20:44.364 
"traddr": "10.0.0.1", 00:20:44.364 "trsvcid": "35504" 00:20:44.364 }, 00:20:44.364 "auth": { 00:20:44.364 "state": "completed", 00:20:44.364 "digest": "sha384", 00:20:44.364 "dhgroup": "ffdhe2048" 00:20:44.364 } 00:20:44.364 } 00:20:44.364 ]' 00:20:44.364 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.364 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.364 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.621 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:44.621 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.621 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.621 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.621 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.878 14:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:20:45.809 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.809 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.809 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.809 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.809 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.809 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.809 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:45.809 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:46.066 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:46.066 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.066 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:46.066 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:46.066 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:46.066 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.066 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.066 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.066 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.066 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.066 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.066 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.324 00:20:46.324 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.324 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.324 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.582 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.582 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.582 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.582 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.582 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.582 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.582 { 00:20:46.582 "cntlid": 61, 00:20:46.582 "qid": 0, 00:20:46.582 "state": "enabled", 00:20:46.582 "thread": "nvmf_tgt_poll_group_000", 00:20:46.582 "listen_address": { 00:20:46.582 "trtype": "TCP", 00:20:46.582 "adrfam": "IPv4", 00:20:46.582 "traddr": "10.0.0.2", 00:20:46.582 "trsvcid": "4420" 00:20:46.582 }, 00:20:46.582 "peer_address": { 00:20:46.582 "trtype": "TCP", 00:20:46.582 "adrfam": "IPv4", 00:20:46.582 "traddr": "10.0.0.1", 00:20:46.582 "trsvcid": "35530" 00:20:46.582 }, 00:20:46.582 "auth": { 00:20:46.582 "state": "completed", 00:20:46.582 "digest": "sha384", 00:20:46.582 "dhgroup": "ffdhe2048" 00:20:46.582 } 00:20:46.582 } 00:20:46.582 ]' 00:20:46.582 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.582 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.582 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.582 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:46.582 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.582 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.582 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.582 14:22:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.839 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:20:47.773 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.773 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.773 14:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.773 14:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.773 14:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.773 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.773 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:47.773 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:48.339 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:48.339 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.339 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:48.339 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:48.339 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:48.339 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.339 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:48.339 14:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.339 14:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.339 14:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.339 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.339 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.597 00:20:48.597 14:22:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.597 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.597 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.883 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.883 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.883 14:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.883 14:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.883 14:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.883 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.883 { 00:20:48.883 "cntlid": 63, 00:20:48.883 "qid": 0, 00:20:48.883 "state": "enabled", 00:20:48.883 "thread": "nvmf_tgt_poll_group_000", 00:20:48.883 "listen_address": { 00:20:48.883 "trtype": "TCP", 00:20:48.883 "adrfam": "IPv4", 00:20:48.883 "traddr": "10.0.0.2", 00:20:48.883 "trsvcid": "4420" 00:20:48.883 }, 00:20:48.883 "peer_address": { 00:20:48.883 "trtype": "TCP", 00:20:48.883 "adrfam": "IPv4", 00:20:48.883 "traddr": "10.0.0.1", 00:20:48.883 "trsvcid": "35560" 00:20:48.883 }, 00:20:48.883 "auth": { 00:20:48.883 "state": "completed", 00:20:48.883 "digest": "sha384", 00:20:48.883 "dhgroup": "ffdhe2048" 00:20:48.883 } 00:20:48.883 } 00:20:48.883 ]' 00:20:48.883 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.883 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.883 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.883 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:48.883 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.883 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.883 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.883 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.173 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:20:50.132 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.133 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.133 14:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.133 14:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
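The xtrace output above repeats one pattern per digest/dhgroup/key combination: register a host key on the target subsystem, attach a host-side controller with the matching DH-HMAC-CHAP key, then verify that the resulting queue pair reports the negotiated digest, dhgroup, and a completed auth state before detaching. The block below is a minimal bash sketch of that connect_authenticate step as it can be inferred from this trace (target/auth.sh@34-@49); it is a reconstruction, not the script itself. The helper names rpc_cmd and hostrpc, the subsystem NQN, and the address/port are taken from the log; the hostid variable and the keys/ckeys arrays are assumed to be set up earlier in the script, outside this excerpt.

    # Sketch reconstructed from the trace above: authenticate one key index
    # against nqn.2024-03.io.spdk:cnode0 and check the negotiated auth fields
    # on the resulting TCP queue pair.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local qpairs
        # A controller (bidirectional) key is only passed when ckey<N> exists
        local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

        # Target side: allow this host to authenticate with key<N> (and ckey<N>)
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
            "nqn.2014-08.org.nvmexpress:uuid:$hostid" \
            --dhchap-key "key$keyid" "${ckey[@]}"

        # Host side: attaching a controller triggers DH-HMAC-CHAP authentication
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 \
            -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" \
            -n nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key "key$keyid" "${ckey[@]}"
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

        # Verify the qpair negotiated the expected digest/dhgroup and finished auth
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

        hostrpc bdev_nvme_detach_controller nvme0
    }

In the trace, the outer loops visible at target/auth.sh@92-@96 appear to call this step once per dhgroup (null, ffdhe2048, ffdhe3072, ...) and per key index 0-3, reconfiguring the host each time with bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups <group>, and the @52-@56 steps exercise the same keys through nvme connect / nvme disconnect before removing the host from the subsystem.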
00:20:50.133 14:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.133 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.133 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.133 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:50.133 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:50.390 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:50.390 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.390 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:50.390 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:50.390 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:50.390 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.390 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.390 14:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.390 14:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.390 14:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.390 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.390 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.648 00:20:50.648 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.648 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.648 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.906 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.906 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.906 14:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.906 14:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.906 14:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.906 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.906 { 
00:20:50.906 "cntlid": 65, 00:20:50.906 "qid": 0, 00:20:50.906 "state": "enabled", 00:20:50.906 "thread": "nvmf_tgt_poll_group_000", 00:20:50.906 "listen_address": { 00:20:50.906 "trtype": "TCP", 00:20:50.906 "adrfam": "IPv4", 00:20:50.906 "traddr": "10.0.0.2", 00:20:50.906 "trsvcid": "4420" 00:20:50.906 }, 00:20:50.906 "peer_address": { 00:20:50.906 "trtype": "TCP", 00:20:50.906 "adrfam": "IPv4", 00:20:50.906 "traddr": "10.0.0.1", 00:20:50.906 "trsvcid": "35586" 00:20:50.906 }, 00:20:50.906 "auth": { 00:20:50.906 "state": "completed", 00:20:50.906 "digest": "sha384", 00:20:50.906 "dhgroup": "ffdhe3072" 00:20:50.906 } 00:20:50.906 } 00:20:50.906 ]' 00:20:50.906 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.906 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.906 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.166 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.166 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.166 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.166 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.166 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.424 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:20:52.357 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.357 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.357 14:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.357 14:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.357 14:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.357 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.357 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.357 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.614 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:52.614 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.614 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:20:52.614 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:52.614 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:52.614 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.614 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.614 14:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.614 14:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.614 14:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.614 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.614 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.871 00:20:52.871 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.871 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.871 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.128 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.128 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.128 14:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.128 14:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.128 14:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.128 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.128 { 00:20:53.128 "cntlid": 67, 00:20:53.128 "qid": 0, 00:20:53.128 "state": "enabled", 00:20:53.128 "thread": "nvmf_tgt_poll_group_000", 00:20:53.128 "listen_address": { 00:20:53.128 "trtype": "TCP", 00:20:53.128 "adrfam": "IPv4", 00:20:53.128 "traddr": "10.0.0.2", 00:20:53.128 "trsvcid": "4420" 00:20:53.128 }, 00:20:53.128 "peer_address": { 00:20:53.128 "trtype": "TCP", 00:20:53.129 "adrfam": "IPv4", 00:20:53.129 "traddr": "10.0.0.1", 00:20:53.129 "trsvcid": "35626" 00:20:53.129 }, 00:20:53.129 "auth": { 00:20:53.129 "state": "completed", 00:20:53.129 "digest": "sha384", 00:20:53.129 "dhgroup": "ffdhe3072" 00:20:53.129 } 00:20:53.129 } 00:20:53.129 ]' 00:20:53.129 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.129 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.129 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.129 14:23:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:53.129 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.386 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.386 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.386 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.644 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:20:54.574 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.574 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.574 14:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.574 14:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.574 14:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.574 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.574 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:54.574 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:54.831 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:54.831 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.831 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:54.831 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:54.831 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:54.831 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.831 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.831 14:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.831 14:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.831 14:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.831 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.831 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.089 00:20:55.089 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.089 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.089 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.346 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.346 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.346 14:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.346 14:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.346 14:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.346 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.346 { 00:20:55.346 "cntlid": 69, 00:20:55.346 "qid": 0, 00:20:55.346 "state": "enabled", 00:20:55.346 "thread": "nvmf_tgt_poll_group_000", 00:20:55.346 "listen_address": { 00:20:55.346 "trtype": "TCP", 00:20:55.346 "adrfam": "IPv4", 00:20:55.346 "traddr": "10.0.0.2", 00:20:55.346 "trsvcid": "4420" 00:20:55.346 }, 00:20:55.346 "peer_address": { 00:20:55.346 "trtype": "TCP", 00:20:55.346 "adrfam": "IPv4", 00:20:55.346 "traddr": "10.0.0.1", 00:20:55.346 "trsvcid": "52126" 00:20:55.346 }, 00:20:55.346 "auth": { 00:20:55.346 "state": "completed", 00:20:55.346 "digest": "sha384", 00:20:55.346 "dhgroup": "ffdhe3072" 00:20:55.346 } 00:20:55.346 } 00:20:55.346 ]' 00:20:55.346 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.346 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.346 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.346 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:55.346 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.603 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.603 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.603 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.603 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret 
DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:20:56.974 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.974 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.974 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.974 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.974 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.975 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.539 00:20:57.540 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.540 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.540 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.540 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.540 14:23:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.540 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.540 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.540 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.540 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.540 { 00:20:57.540 "cntlid": 71, 00:20:57.540 "qid": 0, 00:20:57.540 "state": "enabled", 00:20:57.540 "thread": "nvmf_tgt_poll_group_000", 00:20:57.540 "listen_address": { 00:20:57.540 "trtype": "TCP", 00:20:57.540 "adrfam": "IPv4", 00:20:57.540 "traddr": "10.0.0.2", 00:20:57.540 "trsvcid": "4420" 00:20:57.540 }, 00:20:57.540 "peer_address": { 00:20:57.540 "trtype": "TCP", 00:20:57.540 "adrfam": "IPv4", 00:20:57.540 "traddr": "10.0.0.1", 00:20:57.540 "trsvcid": "52158" 00:20:57.540 }, 00:20:57.540 "auth": { 00:20:57.540 "state": "completed", 00:20:57.540 "digest": "sha384", 00:20:57.540 "dhgroup": "ffdhe3072" 00:20:57.540 } 00:20:57.540 } 00:20:57.540 ]' 00:20:57.540 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.797 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.797 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.797 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:57.797 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.797 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.797 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.797 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.054 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:20:58.987 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.987 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.987 14:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.987 14:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.987 14:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.987 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.987 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.987 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:58.987 14:23:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.245 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:59.245 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.245 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:59.245 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:59.245 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:59.245 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.245 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.245 14:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.245 14:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.245 14:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.245 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.245 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.811 00:20:59.811 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.811 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.811 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.069 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.069 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.069 14:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.069 14:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.069 14:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.069 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.069 { 00:21:00.069 "cntlid": 73, 00:21:00.069 "qid": 0, 00:21:00.069 "state": "enabled", 00:21:00.069 "thread": "nvmf_tgt_poll_group_000", 00:21:00.069 "listen_address": { 00:21:00.069 "trtype": "TCP", 00:21:00.069 "adrfam": "IPv4", 00:21:00.069 "traddr": "10.0.0.2", 00:21:00.069 "trsvcid": "4420" 00:21:00.069 }, 00:21:00.069 "peer_address": { 00:21:00.069 "trtype": "TCP", 00:21:00.069 "adrfam": "IPv4", 00:21:00.069 "traddr": "10.0.0.1", 00:21:00.069 "trsvcid": "52190" 00:21:00.069 }, 00:21:00.069 "auth": { 00:21:00.069 
"state": "completed", 00:21:00.069 "digest": "sha384", 00:21:00.069 "dhgroup": "ffdhe4096" 00:21:00.069 } 00:21:00.069 } 00:21:00.069 ]' 00:21:00.069 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.069 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.069 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.069 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.069 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.069 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.069 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.069 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.326 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:21:01.257 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.257 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.257 14:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.257 14:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.257 14:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.257 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.257 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:01.257 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:01.515 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:01.515 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.515 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:01.515 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:01.515 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:01.515 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.515 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.515 14:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.515 14:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.515 14:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.515 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.515 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.079 00:21:02.079 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.079 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.079 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.079 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.079 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.079 14:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.079 14:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.079 14:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.079 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.079 { 00:21:02.079 "cntlid": 75, 00:21:02.079 "qid": 0, 00:21:02.079 "state": "enabled", 00:21:02.079 "thread": "nvmf_tgt_poll_group_000", 00:21:02.079 "listen_address": { 00:21:02.079 "trtype": "TCP", 00:21:02.079 "adrfam": "IPv4", 00:21:02.079 "traddr": "10.0.0.2", 00:21:02.079 "trsvcid": "4420" 00:21:02.079 }, 00:21:02.079 "peer_address": { 00:21:02.079 "trtype": "TCP", 00:21:02.079 "adrfam": "IPv4", 00:21:02.079 "traddr": "10.0.0.1", 00:21:02.079 "trsvcid": "52208" 00:21:02.079 }, 00:21:02.079 "auth": { 00:21:02.079 "state": "completed", 00:21:02.079 "digest": "sha384", 00:21:02.079 "dhgroup": "ffdhe4096" 00:21:02.079 } 00:21:02.079 } 00:21:02.079 ]' 00:21:02.079 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.336 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.336 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.336 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.336 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.336 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.336 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.336 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.593 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:21:03.524 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.524 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.524 14:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.524 14:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.524 14:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.524 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.524 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.524 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.780 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:03.780 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.780 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.780 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:03.780 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:03.780 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.780 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.780 14:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.780 14:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.780 14:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.780 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.780 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:04.344 00:21:04.344 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.344 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.344 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.601 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.601 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.601 14:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.601 14:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.601 14:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.601 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.601 { 00:21:04.601 "cntlid": 77, 00:21:04.601 "qid": 0, 00:21:04.601 "state": "enabled", 00:21:04.601 "thread": "nvmf_tgt_poll_group_000", 00:21:04.601 "listen_address": { 00:21:04.601 "trtype": "TCP", 00:21:04.601 "adrfam": "IPv4", 00:21:04.601 "traddr": "10.0.0.2", 00:21:04.601 "trsvcid": "4420" 00:21:04.601 }, 00:21:04.601 "peer_address": { 00:21:04.601 "trtype": "TCP", 00:21:04.601 "adrfam": "IPv4", 00:21:04.601 "traddr": "10.0.0.1", 00:21:04.601 "trsvcid": "33624" 00:21:04.601 }, 00:21:04.601 "auth": { 00:21:04.601 "state": "completed", 00:21:04.601 "digest": "sha384", 00:21:04.601 "dhgroup": "ffdhe4096" 00:21:04.601 } 00:21:04.601 } 00:21:04.601 ]' 00:21:04.601 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.601 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.601 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.601 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.602 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.602 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.602 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.602 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.859 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:21:05.791 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.791 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.791 14:23:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.791 14:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.791 14:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.791 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.791 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.791 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.048 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:06.048 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.048 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:06.048 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:06.048 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:06.048 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.048 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:06.048 14:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.048 14:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.048 14:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.048 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.048 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.613 00:21:06.613 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.613 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.613 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.871 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.871 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.871 14:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.871 14:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.871 14:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.871 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.871 { 00:21:06.871 "cntlid": 79, 00:21:06.871 "qid": 
0, 00:21:06.871 "state": "enabled", 00:21:06.871 "thread": "nvmf_tgt_poll_group_000", 00:21:06.871 "listen_address": { 00:21:06.871 "trtype": "TCP", 00:21:06.871 "adrfam": "IPv4", 00:21:06.871 "traddr": "10.0.0.2", 00:21:06.871 "trsvcid": "4420" 00:21:06.871 }, 00:21:06.871 "peer_address": { 00:21:06.871 "trtype": "TCP", 00:21:06.871 "adrfam": "IPv4", 00:21:06.871 "traddr": "10.0.0.1", 00:21:06.871 "trsvcid": "33648" 00:21:06.871 }, 00:21:06.871 "auth": { 00:21:06.871 "state": "completed", 00:21:06.871 "digest": "sha384", 00:21:06.871 "dhgroup": "ffdhe4096" 00:21:06.871 } 00:21:06.871 } 00:21:06.871 ]' 00:21:06.871 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.871 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.871 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.871 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:06.871 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.871 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.871 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.871 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.128 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:21:08.061 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.061 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.061 14:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.061 14:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.061 14:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.061 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.061 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.061 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:08.061 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:08.319 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:08.319 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.319 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:08.319 14:23:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:08.319 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:08.319 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.319 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.319 14:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.319 14:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.319 14:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.319 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.319 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.885 00:21:08.885 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.885 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.885 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.143 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.143 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.143 14:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.143 14:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.143 14:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.143 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.143 { 00:21:09.143 "cntlid": 81, 00:21:09.143 "qid": 0, 00:21:09.143 "state": "enabled", 00:21:09.143 "thread": "nvmf_tgt_poll_group_000", 00:21:09.143 "listen_address": { 00:21:09.143 "trtype": "TCP", 00:21:09.143 "adrfam": "IPv4", 00:21:09.143 "traddr": "10.0.0.2", 00:21:09.143 "trsvcid": "4420" 00:21:09.143 }, 00:21:09.143 "peer_address": { 00:21:09.143 "trtype": "TCP", 00:21:09.143 "adrfam": "IPv4", 00:21:09.143 "traddr": "10.0.0.1", 00:21:09.143 "trsvcid": "33678" 00:21:09.143 }, 00:21:09.143 "auth": { 00:21:09.143 "state": "completed", 00:21:09.143 "digest": "sha384", 00:21:09.143 "dhgroup": "ffdhe6144" 00:21:09.143 } 00:21:09.143 } 00:21:09.143 ]' 00:21:09.143 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.143 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.143 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.143 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:09.143 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.143 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.143 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.143 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.401 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:21:10.776 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.776 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.776 14:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.776 14:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.776 14:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.776 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.776 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.776 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.776 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:10.776 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.776 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:10.776 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:10.776 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:10.776 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.776 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.776 14:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.776 14:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.776 14:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.776 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.776 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.342 00:21:11.342 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.342 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.342 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.600 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.600 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.600 14:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.600 14:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.600 14:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.600 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.600 { 00:21:11.600 "cntlid": 83, 00:21:11.600 "qid": 0, 00:21:11.600 "state": "enabled", 00:21:11.600 "thread": "nvmf_tgt_poll_group_000", 00:21:11.600 "listen_address": { 00:21:11.600 "trtype": "TCP", 00:21:11.600 "adrfam": "IPv4", 00:21:11.600 "traddr": "10.0.0.2", 00:21:11.600 "trsvcid": "4420" 00:21:11.600 }, 00:21:11.600 "peer_address": { 00:21:11.600 "trtype": "TCP", 00:21:11.600 "adrfam": "IPv4", 00:21:11.600 "traddr": "10.0.0.1", 00:21:11.600 "trsvcid": "33704" 00:21:11.600 }, 00:21:11.600 "auth": { 00:21:11.600 "state": "completed", 00:21:11.600 "digest": "sha384", 00:21:11.600 "dhgroup": "ffdhe6144" 00:21:11.600 } 00:21:11.600 } 00:21:11.600 ]' 00:21:11.600 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.600 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.600 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.858 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.858 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.858 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.858 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.858 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.116 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret 
DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:21:13.050 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.050 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.050 14:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.050 14:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.050 14:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.050 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.050 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:13.050 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:13.308 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:13.308 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.308 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:13.308 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:13.308 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:13.308 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.308 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.308 14:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.308 14:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.308 14:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.308 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.308 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.873 00:21:13.873 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.873 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.873 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.131 14:23:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.131 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.131 14:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.131 14:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.131 14:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.131 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.131 { 00:21:14.131 "cntlid": 85, 00:21:14.131 "qid": 0, 00:21:14.131 "state": "enabled", 00:21:14.131 "thread": "nvmf_tgt_poll_group_000", 00:21:14.131 "listen_address": { 00:21:14.131 "trtype": "TCP", 00:21:14.131 "adrfam": "IPv4", 00:21:14.131 "traddr": "10.0.0.2", 00:21:14.131 "trsvcid": "4420" 00:21:14.131 }, 00:21:14.131 "peer_address": { 00:21:14.131 "trtype": "TCP", 00:21:14.131 "adrfam": "IPv4", 00:21:14.131 "traddr": "10.0.0.1", 00:21:14.131 "trsvcid": "44360" 00:21:14.131 }, 00:21:14.131 "auth": { 00:21:14.131 "state": "completed", 00:21:14.131 "digest": "sha384", 00:21:14.131 "dhgroup": "ffdhe6144" 00:21:14.131 } 00:21:14.131 } 00:21:14.131 ]' 00:21:14.131 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.131 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.131 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.131 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:14.131 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.131 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.131 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.131 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.395 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:21:15.377 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.377 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.377 14:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.377 14:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.377 14:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.377 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.377 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:21:15.377 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:15.635 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:15.635 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.635 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:15.635 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:15.635 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:15.635 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.635 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:15.635 14:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.635 14:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.892 14:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.892 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.892 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:16.457 00:21:16.457 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.457 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.457 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.715 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.715 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.715 14:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.715 14:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.715 14:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.715 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.715 { 00:21:16.715 "cntlid": 87, 00:21:16.715 "qid": 0, 00:21:16.715 "state": "enabled", 00:21:16.715 "thread": "nvmf_tgt_poll_group_000", 00:21:16.715 "listen_address": { 00:21:16.715 "trtype": "TCP", 00:21:16.715 "adrfam": "IPv4", 00:21:16.715 "traddr": "10.0.0.2", 00:21:16.715 "trsvcid": "4420" 00:21:16.715 }, 00:21:16.715 "peer_address": { 00:21:16.715 "trtype": "TCP", 00:21:16.715 "adrfam": "IPv4", 00:21:16.715 "traddr": "10.0.0.1", 00:21:16.715 "trsvcid": "44380" 00:21:16.715 }, 00:21:16.715 "auth": { 00:21:16.715 "state": "completed", 
00:21:16.715 "digest": "sha384", 00:21:16.715 "dhgroup": "ffdhe6144" 00:21:16.715 } 00:21:16.715 } 00:21:16.715 ]' 00:21:16.715 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.715 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.715 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.715 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:16.715 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.716 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.716 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.716 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.973 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:21:17.907 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.907 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.907 14:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.907 14:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.907 14:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.907 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.907 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.907 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.907 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:18.165 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:18.165 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.165 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:18.165 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:18.165 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:18.165 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.165 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:18.165 14:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.165 14:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.165 14:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.165 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.165 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.099 00:21:19.099 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.099 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.099 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.357 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.357 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.357 14:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.357 14:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.357 14:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.358 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.358 { 00:21:19.358 "cntlid": 89, 00:21:19.358 "qid": 0, 00:21:19.358 "state": "enabled", 00:21:19.358 "thread": "nvmf_tgt_poll_group_000", 00:21:19.358 "listen_address": { 00:21:19.358 "trtype": "TCP", 00:21:19.358 "adrfam": "IPv4", 00:21:19.358 "traddr": "10.0.0.2", 00:21:19.358 "trsvcid": "4420" 00:21:19.358 }, 00:21:19.358 "peer_address": { 00:21:19.358 "trtype": "TCP", 00:21:19.358 "adrfam": "IPv4", 00:21:19.358 "traddr": "10.0.0.1", 00:21:19.358 "trsvcid": "44396" 00:21:19.358 }, 00:21:19.358 "auth": { 00:21:19.358 "state": "completed", 00:21:19.358 "digest": "sha384", 00:21:19.358 "dhgroup": "ffdhe8192" 00:21:19.358 } 00:21:19.358 } 00:21:19.358 ]' 00:21:19.358 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.358 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.358 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.358 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.358 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.615 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.615 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.615 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.873 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:21:20.808 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.808 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.808 14:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.808 14:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.808 14:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.808 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.808 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.808 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:21.066 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:21.066 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.066 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:21.066 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:21.066 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:21.066 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.066 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.066 14:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.066 14:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.066 14:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.066 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.066 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:21:22.000 00:21:22.000 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.000 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.000 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.000 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.000 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.000 14:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.000 14:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.258 14:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.258 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.258 { 00:21:22.258 "cntlid": 91, 00:21:22.258 "qid": 0, 00:21:22.258 "state": "enabled", 00:21:22.258 "thread": "nvmf_tgt_poll_group_000", 00:21:22.258 "listen_address": { 00:21:22.258 "trtype": "TCP", 00:21:22.258 "adrfam": "IPv4", 00:21:22.258 "traddr": "10.0.0.2", 00:21:22.258 "trsvcid": "4420" 00:21:22.258 }, 00:21:22.258 "peer_address": { 00:21:22.258 "trtype": "TCP", 00:21:22.258 "adrfam": "IPv4", 00:21:22.258 "traddr": "10.0.0.1", 00:21:22.258 "trsvcid": "44424" 00:21:22.258 }, 00:21:22.258 "auth": { 00:21:22.258 "state": "completed", 00:21:22.258 "digest": "sha384", 00:21:22.258 "dhgroup": "ffdhe8192" 00:21:22.258 } 00:21:22.258 } 00:21:22.258 ]' 00:21:22.258 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.258 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.258 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.258 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:22.258 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.258 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.258 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.258 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.516 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:21:23.449 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.449 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.449 14:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:23.449 14:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.449 14:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.449 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.449 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:23.449 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:23.705 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:23.705 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.705 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:23.705 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:23.705 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:23.705 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.705 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.705 14:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.705 14:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.705 14:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.705 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.705 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.637 00:21:24.637 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.637 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.637 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.895 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.895 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.895 14:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.895 14:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.895 14:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.895 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.895 { 
00:21:24.895 "cntlid": 93, 00:21:24.895 "qid": 0, 00:21:24.895 "state": "enabled", 00:21:24.895 "thread": "nvmf_tgt_poll_group_000", 00:21:24.895 "listen_address": { 00:21:24.895 "trtype": "TCP", 00:21:24.895 "adrfam": "IPv4", 00:21:24.895 "traddr": "10.0.0.2", 00:21:24.895 "trsvcid": "4420" 00:21:24.895 }, 00:21:24.895 "peer_address": { 00:21:24.895 "trtype": "TCP", 00:21:24.895 "adrfam": "IPv4", 00:21:24.895 "traddr": "10.0.0.1", 00:21:24.895 "trsvcid": "39294" 00:21:24.895 }, 00:21:24.895 "auth": { 00:21:24.895 "state": "completed", 00:21:24.895 "digest": "sha384", 00:21:24.895 "dhgroup": "ffdhe8192" 00:21:24.895 } 00:21:24.895 } 00:21:24.895 ]' 00:21:24.895 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.895 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.895 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.154 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:25.154 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.154 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.154 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.154 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.412 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:21:26.346 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.346 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.346 14:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.346 14:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.346 14:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.346 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.346 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:26.346 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:26.604 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:26.604 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.604 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:26.604 14:23:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:26.604 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:26.604 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.604 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:26.604 14:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.604 14:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.604 14:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.604 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:26.604 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.538 00:21:27.538 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.538 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.538 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.796 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.796 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.796 14:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.796 14:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.796 14:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.796 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.796 { 00:21:27.796 "cntlid": 95, 00:21:27.796 "qid": 0, 00:21:27.796 "state": "enabled", 00:21:27.796 "thread": "nvmf_tgt_poll_group_000", 00:21:27.796 "listen_address": { 00:21:27.796 "trtype": "TCP", 00:21:27.796 "adrfam": "IPv4", 00:21:27.796 "traddr": "10.0.0.2", 00:21:27.796 "trsvcid": "4420" 00:21:27.796 }, 00:21:27.796 "peer_address": { 00:21:27.796 "trtype": "TCP", 00:21:27.796 "adrfam": "IPv4", 00:21:27.796 "traddr": "10.0.0.1", 00:21:27.796 "trsvcid": "39316" 00:21:27.796 }, 00:21:27.796 "auth": { 00:21:27.796 "state": "completed", 00:21:27.796 "digest": "sha384", 00:21:27.796 "dhgroup": "ffdhe8192" 00:21:27.796 } 00:21:27.796 } 00:21:27.796 ]' 00:21:27.796 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.796 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.796 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.796 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:27.796 14:23:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.796 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.796 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.796 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.052 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:21:28.982 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.982 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.982 14:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.982 14:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.982 14:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.983 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:28.983 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.983 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.983 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:28.983 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:29.240 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:29.240 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:29.240 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:29.240 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:29.240 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:29.240 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.240 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.240 14:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.240 14:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.240 14:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.240 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.240 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.803 00:21:29.803 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.803 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.803 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.803 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.803 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.803 14:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.804 14:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.804 14:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.804 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.804 { 00:21:29.804 "cntlid": 97, 00:21:29.804 "qid": 0, 00:21:29.804 "state": "enabled", 00:21:29.804 "thread": "nvmf_tgt_poll_group_000", 00:21:29.804 "listen_address": { 00:21:29.804 "trtype": "TCP", 00:21:29.804 "adrfam": "IPv4", 00:21:29.804 "traddr": "10.0.0.2", 00:21:29.804 "trsvcid": "4420" 00:21:29.804 }, 00:21:29.804 "peer_address": { 00:21:29.804 "trtype": "TCP", 00:21:29.804 "adrfam": "IPv4", 00:21:29.804 "traddr": "10.0.0.1", 00:21:29.804 "trsvcid": "39336" 00:21:29.804 }, 00:21:29.804 "auth": { 00:21:29.804 "state": "completed", 00:21:29.804 "digest": "sha512", 00:21:29.804 "dhgroup": "null" 00:21:29.804 } 00:21:29.804 } 00:21:29.804 ]' 00:21:29.804 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.060 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.060 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.060 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:30.060 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.060 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.060 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.060 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.317 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret 
DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:21:31.264 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.264 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.264 14:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.264 14:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.264 14:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.264 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.264 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:31.264 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:31.523 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:31.523 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.523 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.523 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:31.523 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:31.523 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.523 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.523 14:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.523 14:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.523 14:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.523 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.523 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.781 00:21:31.781 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.781 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.781 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.039 14:23:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.039 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.039 14:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.039 14:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.039 14:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.039 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.039 { 00:21:32.039 "cntlid": 99, 00:21:32.039 "qid": 0, 00:21:32.039 "state": "enabled", 00:21:32.039 "thread": "nvmf_tgt_poll_group_000", 00:21:32.039 "listen_address": { 00:21:32.039 "trtype": "TCP", 00:21:32.039 "adrfam": "IPv4", 00:21:32.039 "traddr": "10.0.0.2", 00:21:32.039 "trsvcid": "4420" 00:21:32.039 }, 00:21:32.039 "peer_address": { 00:21:32.039 "trtype": "TCP", 00:21:32.039 "adrfam": "IPv4", 00:21:32.039 "traddr": "10.0.0.1", 00:21:32.039 "trsvcid": "39354" 00:21:32.039 }, 00:21:32.039 "auth": { 00:21:32.039 "state": "completed", 00:21:32.039 "digest": "sha512", 00:21:32.039 "dhgroup": "null" 00:21:32.039 } 00:21:32.039 } 00:21:32.039 ]' 00:21:32.039 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.297 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.297 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.297 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:32.297 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.297 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.297 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.297 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.554 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:21:33.486 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.487 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.487 14:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.487 14:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.487 14:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.487 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.487 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:33.487 14:23:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:33.744 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:33.744 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.744 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.744 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:33.744 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:33.744 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.744 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.744 14:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.744 14:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.744 14:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.744 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.744 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.002 00:21:34.260 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.260 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.260 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.260 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.260 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.260 14:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.260 14:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.517 14:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.517 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.517 { 00:21:34.517 "cntlid": 101, 00:21:34.517 "qid": 0, 00:21:34.517 "state": "enabled", 00:21:34.517 "thread": "nvmf_tgt_poll_group_000", 00:21:34.517 "listen_address": { 00:21:34.517 "trtype": "TCP", 00:21:34.517 "adrfam": "IPv4", 00:21:34.517 "traddr": "10.0.0.2", 00:21:34.517 "trsvcid": "4420" 00:21:34.517 }, 00:21:34.517 "peer_address": { 00:21:34.517 "trtype": "TCP", 00:21:34.517 "adrfam": "IPv4", 00:21:34.517 "traddr": "10.0.0.1", 00:21:34.517 "trsvcid": "33830" 00:21:34.517 }, 00:21:34.517 "auth": 
{ 00:21:34.517 "state": "completed", 00:21:34.517 "digest": "sha512", 00:21:34.517 "dhgroup": "null" 00:21:34.517 } 00:21:34.517 } 00:21:34.517 ]' 00:21:34.517 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.517 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.517 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.517 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:34.517 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.517 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.517 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.517 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.775 14:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:21:35.705 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.705 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.705 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.705 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.705 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.705 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.705 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:35.705 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:35.963 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:35.963 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.963 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.963 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:35.963 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:35.963 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.963 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:35.963 14:23:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.963 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.963 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.963 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.963 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.220 00:21:36.220 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.220 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.220 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.478 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.478 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.478 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.478 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.478 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.478 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.478 { 00:21:36.478 "cntlid": 103, 00:21:36.478 "qid": 0, 00:21:36.478 "state": "enabled", 00:21:36.478 "thread": "nvmf_tgt_poll_group_000", 00:21:36.478 "listen_address": { 00:21:36.478 "trtype": "TCP", 00:21:36.478 "adrfam": "IPv4", 00:21:36.478 "traddr": "10.0.0.2", 00:21:36.478 "trsvcid": "4420" 00:21:36.478 }, 00:21:36.478 "peer_address": { 00:21:36.478 "trtype": "TCP", 00:21:36.478 "adrfam": "IPv4", 00:21:36.478 "traddr": "10.0.0.1", 00:21:36.478 "trsvcid": "33836" 00:21:36.478 }, 00:21:36.478 "auth": { 00:21:36.478 "state": "completed", 00:21:36.478 "digest": "sha512", 00:21:36.478 "dhgroup": "null" 00:21:36.478 } 00:21:36.478 } 00:21:36.478 ]' 00:21:36.478 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.478 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.478 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.735 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:36.735 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.735 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.735 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.735 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.994 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:21:37.927 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.927 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.927 14:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.927 14:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.927 14:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.927 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.927 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.927 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.927 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:38.185 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:38.185 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.185 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.185 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:38.185 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:38.185 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.185 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.185 14:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.185 14:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.185 14:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.185 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.185 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.750 00:21:38.750 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.750 14:23:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.750 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.750 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.750 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.750 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.750 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.750 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.750 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.750 { 00:21:38.750 "cntlid": 105, 00:21:38.750 "qid": 0, 00:21:38.750 "state": "enabled", 00:21:38.750 "thread": "nvmf_tgt_poll_group_000", 00:21:38.750 "listen_address": { 00:21:38.750 "trtype": "TCP", 00:21:38.750 "adrfam": "IPv4", 00:21:38.750 "traddr": "10.0.0.2", 00:21:38.750 "trsvcid": "4420" 00:21:38.750 }, 00:21:38.750 "peer_address": { 00:21:38.750 "trtype": "TCP", 00:21:38.750 "adrfam": "IPv4", 00:21:38.750 "traddr": "10.0.0.1", 00:21:38.750 "trsvcid": "33848" 00:21:38.750 }, 00:21:38.750 "auth": { 00:21:38.750 "state": "completed", 00:21:38.750 "digest": "sha512", 00:21:38.750 "dhgroup": "ffdhe2048" 00:21:38.750 } 00:21:38.750 } 00:21:38.750 ]' 00:21:38.750 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.008 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.008 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.008 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:39.008 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.008 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.008 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.008 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.266 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:21:40.197 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.197 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.197 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.197 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
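Each pass is then verified and torn down the same way: the host-side controller list is checked for nvme0, the subsystem's qpairs are queried and the negotiated auth fields are compared against the expected digest, dhgroup and state, the controller is detached, and the handshake is repeated with the kernel initiator via nvme-cli using the raw DHHC-1 secrets before the host entry is removed again. A condensed sketch using the same variables as the previous sketch; the jq filters and RPC names come from the trace, the expected values change per iteration, and key/ckey stand in for the DHHC-1:... strings shown in the log:

  # Host saw the controller come up under the expected name.
  [[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Target-side qpair really negotiated the expected auth parameters
  # (values shown here match the sha512/ffdhe2048 passes above).
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Tear down the SPDK-host controller, redo the handshake with the kernel
  # initiator, then remove the host from the subsystem again.
  # key=DHHC-1:...  ckey=DHHC-1:...  (the literal secrets from the log, elided here)
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The DHHC-1:NN:...: strings are the standard NVMe DH-HMAC-CHAP secret representation; the NN field encodes whether and with which hash the secret was transformed (00 for a plain secret, 01/02/03 for SHA-256/384/512), which is why the nvme connect invocations in the trace pair secrets with different prefixes.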
00:21:40.197 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.197 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.197 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:40.197 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:40.454 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:40.454 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.454 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.454 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:40.454 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:40.454 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.454 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.454 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.454 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.454 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.454 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.454 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.020 00:21:41.020 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.020 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.020 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.020 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.020 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.020 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.020 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.020 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.020 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.020 { 00:21:41.020 "cntlid": 107, 00:21:41.020 "qid": 0, 00:21:41.020 "state": "enabled", 00:21:41.020 "thread": 
"nvmf_tgt_poll_group_000", 00:21:41.020 "listen_address": { 00:21:41.020 "trtype": "TCP", 00:21:41.020 "adrfam": "IPv4", 00:21:41.020 "traddr": "10.0.0.2", 00:21:41.020 "trsvcid": "4420" 00:21:41.020 }, 00:21:41.020 "peer_address": { 00:21:41.020 "trtype": "TCP", 00:21:41.020 "adrfam": "IPv4", 00:21:41.020 "traddr": "10.0.0.1", 00:21:41.020 "trsvcid": "33876" 00:21:41.020 }, 00:21:41.020 "auth": { 00:21:41.020 "state": "completed", 00:21:41.020 "digest": "sha512", 00:21:41.020 "dhgroup": "ffdhe2048" 00:21:41.020 } 00:21:41.020 } 00:21:41.020 ]' 00:21:41.020 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.278 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.278 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.278 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:41.278 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.278 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.278 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.278 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.537 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:21:42.470 14:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.470 14:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.470 14:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.470 14:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.470 14:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.470 14:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.470 14:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:42.470 14:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:42.727 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:42.727 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.727 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.727 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:42.727 14:23:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:42.727 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.727 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.727 14:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.727 14:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.727 14:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.727 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.727 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.986 00:21:42.986 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.986 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.986 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.244 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.244 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.244 14:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.244 14:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.244 14:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.244 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.244 { 00:21:43.244 "cntlid": 109, 00:21:43.244 "qid": 0, 00:21:43.244 "state": "enabled", 00:21:43.244 "thread": "nvmf_tgt_poll_group_000", 00:21:43.244 "listen_address": { 00:21:43.244 "trtype": "TCP", 00:21:43.244 "adrfam": "IPv4", 00:21:43.244 "traddr": "10.0.0.2", 00:21:43.244 "trsvcid": "4420" 00:21:43.244 }, 00:21:43.244 "peer_address": { 00:21:43.244 "trtype": "TCP", 00:21:43.244 "adrfam": "IPv4", 00:21:43.244 "traddr": "10.0.0.1", 00:21:43.244 "trsvcid": "33902" 00:21:43.244 }, 00:21:43.244 "auth": { 00:21:43.244 "state": "completed", 00:21:43.244 "digest": "sha512", 00:21:43.244 "dhgroup": "ffdhe2048" 00:21:43.244 } 00:21:43.244 } 00:21:43.244 ]' 00:21:43.244 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.502 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.502 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.502 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:43.502 14:23:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.502 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.502 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.502 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.766 14:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:21:44.727 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.727 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.727 14:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.727 14:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.727 14:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.727 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.727 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.727 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.985 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:44.985 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.986 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.986 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:44.986 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:44.986 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.986 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:44.986 14:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.986 14:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.986 14:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.986 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.986 14:23:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.244 00:21:45.244 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.244 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.244 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.502 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.502 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.502 14:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.502 14:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.502 14:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.502 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.502 { 00:21:45.502 "cntlid": 111, 00:21:45.502 "qid": 0, 00:21:45.502 "state": "enabled", 00:21:45.502 "thread": "nvmf_tgt_poll_group_000", 00:21:45.502 "listen_address": { 00:21:45.502 "trtype": "TCP", 00:21:45.502 "adrfam": "IPv4", 00:21:45.502 "traddr": "10.0.0.2", 00:21:45.502 "trsvcid": "4420" 00:21:45.502 }, 00:21:45.502 "peer_address": { 00:21:45.502 "trtype": "TCP", 00:21:45.502 "adrfam": "IPv4", 00:21:45.502 "traddr": "10.0.0.1", 00:21:45.502 "trsvcid": "48070" 00:21:45.502 }, 00:21:45.502 "auth": { 00:21:45.502 "state": "completed", 00:21:45.502 "digest": "sha512", 00:21:45.502 "dhgroup": "ffdhe2048" 00:21:45.502 } 00:21:45.502 } 00:21:45.502 ]' 00:21:45.502 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.502 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.760 14:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.760 14:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:45.760 14:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.760 14:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.760 14:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.760 14:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.018 14:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:21:46.952 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.952 14:23:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.952 14:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.952 14:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.952 14:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.952 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:46.952 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.952 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:46.952 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:47.210 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:47.210 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.210 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.210 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:47.210 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:47.210 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.210 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.210 14:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.210 14:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.210 14:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.210 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.210 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.468 00:21:47.468 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.468 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.468 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.726 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.726 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.726 14:23:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.726 14:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.726 14:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.726 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.726 { 00:21:47.726 "cntlid": 113, 00:21:47.726 "qid": 0, 00:21:47.726 "state": "enabled", 00:21:47.726 "thread": "nvmf_tgt_poll_group_000", 00:21:47.726 "listen_address": { 00:21:47.726 "trtype": "TCP", 00:21:47.726 "adrfam": "IPv4", 00:21:47.726 "traddr": "10.0.0.2", 00:21:47.726 "trsvcid": "4420" 00:21:47.726 }, 00:21:47.726 "peer_address": { 00:21:47.726 "trtype": "TCP", 00:21:47.726 "adrfam": "IPv4", 00:21:47.726 "traddr": "10.0.0.1", 00:21:47.726 "trsvcid": "48088" 00:21:47.726 }, 00:21:47.726 "auth": { 00:21:47.726 "state": "completed", 00:21:47.726 "digest": "sha512", 00:21:47.726 "dhgroup": "ffdhe3072" 00:21:47.726 } 00:21:47.726 } 00:21:47.726 ]' 00:21:47.726 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.985 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.985 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.985 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:47.985 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.985 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.985 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.985 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.243 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:21:49.176 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.176 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.176 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.176 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.176 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.176 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.176 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.176 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.433 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:49.433 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.433 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.433 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:49.433 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:49.433 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.433 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.433 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.433 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.433 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.433 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.433 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.691 00:21:49.691 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.691 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.691 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.949 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.949 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.949 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.949 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.949 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.949 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.949 { 00:21:49.949 "cntlid": 115, 00:21:49.949 "qid": 0, 00:21:49.949 "state": "enabled", 00:21:49.949 "thread": "nvmf_tgt_poll_group_000", 00:21:49.949 "listen_address": { 00:21:49.949 "trtype": "TCP", 00:21:49.949 "adrfam": "IPv4", 00:21:49.949 "traddr": "10.0.0.2", 00:21:49.949 "trsvcid": "4420" 00:21:49.949 }, 00:21:49.949 "peer_address": { 00:21:49.949 "trtype": "TCP", 00:21:49.949 "adrfam": "IPv4", 00:21:49.949 "traddr": "10.0.0.1", 00:21:49.949 "trsvcid": "48106" 00:21:49.949 }, 00:21:49.949 "auth": { 00:21:49.949 "state": "completed", 00:21:49.949 "digest": "sha512", 00:21:49.949 "dhgroup": "ffdhe3072" 00:21:49.949 } 00:21:49.949 } 
00:21:49.949 ]' 00:21:49.949 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.207 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.207 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.207 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:50.207 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.207 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.207 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.207 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.465 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:21:51.399 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.399 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.399 14:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.399 14:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.399 14:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.399 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.399 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:51.399 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:51.657 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:51.657 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.657 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.657 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:51.657 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:51.657 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.657 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.657 14:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.657 14:24:00 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.657 14:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.657 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.657 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.916 00:21:51.916 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.916 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.916 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.174 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.174 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.174 14:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.174 14:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.174 14:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.174 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.174 { 00:21:52.174 "cntlid": 117, 00:21:52.174 "qid": 0, 00:21:52.174 "state": "enabled", 00:21:52.174 "thread": "nvmf_tgt_poll_group_000", 00:21:52.174 "listen_address": { 00:21:52.174 "trtype": "TCP", 00:21:52.174 "adrfam": "IPv4", 00:21:52.174 "traddr": "10.0.0.2", 00:21:52.174 "trsvcid": "4420" 00:21:52.174 }, 00:21:52.174 "peer_address": { 00:21:52.174 "trtype": "TCP", 00:21:52.174 "adrfam": "IPv4", 00:21:52.174 "traddr": "10.0.0.1", 00:21:52.174 "trsvcid": "48134" 00:21:52.174 }, 00:21:52.174 "auth": { 00:21:52.174 "state": "completed", 00:21:52.174 "digest": "sha512", 00:21:52.174 "dhgroup": "ffdhe3072" 00:21:52.174 } 00:21:52.174 } 00:21:52.174 ]' 00:21:52.174 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.432 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.432 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.432 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:52.432 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.432 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.432 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.432 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.689 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:21:53.619 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.619 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.619 14:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.619 14:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.619 14:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.619 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.619 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.619 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.876 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:53.876 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.876 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.876 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:53.876 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:53.876 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.876 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:53.876 14:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.876 14:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.876 14:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.876 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.876 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:54.132 00:21:54.132 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.132 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.132 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.388 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.388 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.388 14:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.388 14:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.388 14:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.388 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.388 { 00:21:54.388 "cntlid": 119, 00:21:54.388 "qid": 0, 00:21:54.388 "state": "enabled", 00:21:54.388 "thread": "nvmf_tgt_poll_group_000", 00:21:54.388 "listen_address": { 00:21:54.388 "trtype": "TCP", 00:21:54.388 "adrfam": "IPv4", 00:21:54.388 "traddr": "10.0.0.2", 00:21:54.388 "trsvcid": "4420" 00:21:54.388 }, 00:21:54.388 "peer_address": { 00:21:54.388 "trtype": "TCP", 00:21:54.388 "adrfam": "IPv4", 00:21:54.388 "traddr": "10.0.0.1", 00:21:54.388 "trsvcid": "51492" 00:21:54.388 }, 00:21:54.388 "auth": { 00:21:54.388 "state": "completed", 00:21:54.389 "digest": "sha512", 00:21:54.389 "dhgroup": "ffdhe3072" 00:21:54.389 } 00:21:54.389 } 00:21:54.389 ]' 00:21:54.389 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.644 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.644 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.644 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.644 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.644 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.644 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.644 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.901 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:21:55.831 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.831 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.831 14:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.831 14:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.831 14:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.831 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.831 14:24:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.831 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.831 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:56.089 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:56.089 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.089 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.089 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:56.089 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:56.089 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.089 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.089 14:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.089 14:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.089 14:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.089 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.089 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.347 00:21:56.347 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.347 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.347 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.605 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.605 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.605 14:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.605 14:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.605 14:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.605 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.605 { 00:21:56.605 "cntlid": 121, 00:21:56.605 "qid": 0, 00:21:56.605 "state": "enabled", 00:21:56.605 "thread": "nvmf_tgt_poll_group_000", 00:21:56.605 "listen_address": { 00:21:56.605 "trtype": "TCP", 00:21:56.605 "adrfam": "IPv4", 
00:21:56.605 "traddr": "10.0.0.2", 00:21:56.605 "trsvcid": "4420" 00:21:56.605 }, 00:21:56.605 "peer_address": { 00:21:56.605 "trtype": "TCP", 00:21:56.605 "adrfam": "IPv4", 00:21:56.605 "traddr": "10.0.0.1", 00:21:56.605 "trsvcid": "51520" 00:21:56.605 }, 00:21:56.605 "auth": { 00:21:56.605 "state": "completed", 00:21:56.605 "digest": "sha512", 00:21:56.605 "dhgroup": "ffdhe4096" 00:21:56.605 } 00:21:56.605 } 00:21:56.605 ]' 00:21:56.605 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.863 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.863 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.863 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:56.863 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:56.863 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.863 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.863 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.121 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:21:58.054 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.054 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.054 14:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.054 14:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.054 14:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.054 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.054 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:58.054 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:58.312 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:58.312 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.312 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.312 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:58.312 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:58.312 14:24:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.312 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.312 14:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.312 14:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.312 14:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.312 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.312 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.878 00:21:58.878 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.878 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.878 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.136 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.136 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.136 14:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.136 14:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.136 14:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.136 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.136 { 00:21:59.136 "cntlid": 123, 00:21:59.136 "qid": 0, 00:21:59.136 "state": "enabled", 00:21:59.136 "thread": "nvmf_tgt_poll_group_000", 00:21:59.136 "listen_address": { 00:21:59.136 "trtype": "TCP", 00:21:59.136 "adrfam": "IPv4", 00:21:59.136 "traddr": "10.0.0.2", 00:21:59.136 "trsvcid": "4420" 00:21:59.136 }, 00:21:59.136 "peer_address": { 00:21:59.136 "trtype": "TCP", 00:21:59.136 "adrfam": "IPv4", 00:21:59.136 "traddr": "10.0.0.1", 00:21:59.136 "trsvcid": "51558" 00:21:59.136 }, 00:21:59.136 "auth": { 00:21:59.136 "state": "completed", 00:21:59.136 "digest": "sha512", 00:21:59.136 "dhgroup": "ffdhe4096" 00:21:59.136 } 00:21:59.136 } 00:21:59.136 ]' 00:21:59.136 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.136 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.136 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.136 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:59.136 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.136 14:24:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.136 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.136 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.394 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:22:00.328 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.328 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.328 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.328 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.328 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.328 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.328 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.328 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.587 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:00.587 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.587 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:00.587 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:00.587 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:00.587 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.587 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.587 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.587 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.587 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.587 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.587 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.153 00:22:01.153 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.153 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.153 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.411 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.411 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.411 14:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.411 14:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.411 14:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.411 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.411 { 00:22:01.411 "cntlid": 125, 00:22:01.411 "qid": 0, 00:22:01.411 "state": "enabled", 00:22:01.411 "thread": "nvmf_tgt_poll_group_000", 00:22:01.411 "listen_address": { 00:22:01.411 "trtype": "TCP", 00:22:01.411 "adrfam": "IPv4", 00:22:01.411 "traddr": "10.0.0.2", 00:22:01.411 "trsvcid": "4420" 00:22:01.411 }, 00:22:01.411 "peer_address": { 00:22:01.411 "trtype": "TCP", 00:22:01.411 "adrfam": "IPv4", 00:22:01.411 "traddr": "10.0.0.1", 00:22:01.411 "trsvcid": "51584" 00:22:01.411 }, 00:22:01.411 "auth": { 00:22:01.411 "state": "completed", 00:22:01.411 "digest": "sha512", 00:22:01.411 "dhgroup": "ffdhe4096" 00:22:01.411 } 00:22:01.411 } 00:22:01.411 ]' 00:22:01.411 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.411 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.411 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.411 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.411 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.411 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.411 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.411 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.669 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:22:02.600 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:22:02.600 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.600 14:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.600 14:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.600 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.600 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.600 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.600 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.858 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:02.858 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.858 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.858 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:02.858 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:02.858 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.858 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:02.858 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.858 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.858 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.858 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:02.858 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:03.424 00:22:03.424 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.424 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.424 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.681 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.681 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.681 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.681 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:22:03.681 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.681 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.681 { 00:22:03.681 "cntlid": 127, 00:22:03.681 "qid": 0, 00:22:03.681 "state": "enabled", 00:22:03.681 "thread": "nvmf_tgt_poll_group_000", 00:22:03.681 "listen_address": { 00:22:03.681 "trtype": "TCP", 00:22:03.681 "adrfam": "IPv4", 00:22:03.681 "traddr": "10.0.0.2", 00:22:03.681 "trsvcid": "4420" 00:22:03.681 }, 00:22:03.681 "peer_address": { 00:22:03.681 "trtype": "TCP", 00:22:03.681 "adrfam": "IPv4", 00:22:03.681 "traddr": "10.0.0.1", 00:22:03.681 "trsvcid": "51626" 00:22:03.681 }, 00:22:03.681 "auth": { 00:22:03.681 "state": "completed", 00:22:03.681 "digest": "sha512", 00:22:03.681 "dhgroup": "ffdhe4096" 00:22:03.681 } 00:22:03.681 } 00:22:03.681 ]' 00:22:03.681 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.681 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.681 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.681 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.681 14:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.681 14:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.681 14:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.681 14:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.937 14:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:22:04.867 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.867 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.867 14:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.867 14:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.867 14:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.867 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.867 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:04.867 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:04.867 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:05.125 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:22:05.125 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.125 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.125 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:05.125 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:05.125 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.125 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.125 14:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.125 14:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.125 14:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.125 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.125 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.691 00:22:05.691 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.691 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.691 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.948 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.948 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.948 14:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.948 14:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.948 14:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.948 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.948 { 00:22:05.948 "cntlid": 129, 00:22:05.948 "qid": 0, 00:22:05.948 "state": "enabled", 00:22:05.948 "thread": "nvmf_tgt_poll_group_000", 00:22:05.948 "listen_address": { 00:22:05.948 "trtype": "TCP", 00:22:05.948 "adrfam": "IPv4", 00:22:05.948 "traddr": "10.0.0.2", 00:22:05.948 "trsvcid": "4420" 00:22:05.948 }, 00:22:05.948 "peer_address": { 00:22:05.948 "trtype": "TCP", 00:22:05.948 "adrfam": "IPv4", 00:22:05.948 "traddr": "10.0.0.1", 00:22:05.948 "trsvcid": "33476" 00:22:05.948 }, 00:22:05.948 "auth": { 00:22:05.948 "state": "completed", 00:22:05.948 "digest": "sha512", 00:22:05.948 "dhgroup": "ffdhe6144" 00:22:05.948 } 00:22:05.948 } 00:22:05.948 ]' 00:22:05.948 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.948 14:24:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.948 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.948 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:05.948 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.206 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.206 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.206 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.464 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:22:07.397 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.397 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.397 14:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.397 14:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.397 14:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.397 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.397 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.397 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.654 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:07.654 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.654 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.655 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:07.655 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:07.655 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.655 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.655 14:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.655 14:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.655 14:24:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.655 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.655 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.219 00:22:08.219 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.219 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.219 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.477 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.477 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.477 14:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.477 14:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.477 14:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.477 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.477 { 00:22:08.477 "cntlid": 131, 00:22:08.477 "qid": 0, 00:22:08.477 "state": "enabled", 00:22:08.477 "thread": "nvmf_tgt_poll_group_000", 00:22:08.477 "listen_address": { 00:22:08.477 "trtype": "TCP", 00:22:08.477 "adrfam": "IPv4", 00:22:08.477 "traddr": "10.0.0.2", 00:22:08.477 "trsvcid": "4420" 00:22:08.477 }, 00:22:08.477 "peer_address": { 00:22:08.477 "trtype": "TCP", 00:22:08.477 "adrfam": "IPv4", 00:22:08.477 "traddr": "10.0.0.1", 00:22:08.477 "trsvcid": "33502" 00:22:08.477 }, 00:22:08.477 "auth": { 00:22:08.477 "state": "completed", 00:22:08.477 "digest": "sha512", 00:22:08.477 "dhgroup": "ffdhe6144" 00:22:08.477 } 00:22:08.477 } 00:22:08.477 ]' 00:22:08.477 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.477 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.477 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.477 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:08.477 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.477 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.477 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.477 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.042 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:22:09.974 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.974 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.974 14:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.974 14:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.974 14:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.974 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.974 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.974 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:10.232 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:10.232 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.232 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.232 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:10.232 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:10.232 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.232 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.232 14:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.232 14:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.232 14:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.232 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.232 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.828 00:22:10.828 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.828 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.828 14:24:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.110 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.110 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.110 14:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.110 14:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.110 14:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.110 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.110 { 00:22:11.110 "cntlid": 133, 00:22:11.110 "qid": 0, 00:22:11.110 "state": "enabled", 00:22:11.110 "thread": "nvmf_tgt_poll_group_000", 00:22:11.110 "listen_address": { 00:22:11.110 "trtype": "TCP", 00:22:11.110 "adrfam": "IPv4", 00:22:11.110 "traddr": "10.0.0.2", 00:22:11.110 "trsvcid": "4420" 00:22:11.110 }, 00:22:11.110 "peer_address": { 00:22:11.110 "trtype": "TCP", 00:22:11.110 "adrfam": "IPv4", 00:22:11.110 "traddr": "10.0.0.1", 00:22:11.110 "trsvcid": "33520" 00:22:11.110 }, 00:22:11.110 "auth": { 00:22:11.110 "state": "completed", 00:22:11.110 "digest": "sha512", 00:22:11.110 "dhgroup": "ffdhe6144" 00:22:11.110 } 00:22:11.110 } 00:22:11.110 ]' 00:22:11.110 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:11.110 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.110 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:11.110 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:11.110 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:11.110 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.110 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.111 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.367 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:22:12.301 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.301 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.301 14:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.301 14:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.301 14:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.301 14:24:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.301 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.301 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.559 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:12.559 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.559 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:12.559 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:12.559 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:12.559 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.559 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:12.559 14:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.559 14:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.559 14:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.559 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.559 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:13.125 00:22:13.125 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.125 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.125 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.383 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.383 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.383 14:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.383 14:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.383 14:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.383 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.383 { 00:22:13.383 "cntlid": 135, 00:22:13.383 "qid": 0, 00:22:13.383 "state": "enabled", 00:22:13.383 "thread": "nvmf_tgt_poll_group_000", 00:22:13.383 "listen_address": { 00:22:13.383 "trtype": "TCP", 00:22:13.383 "adrfam": "IPv4", 00:22:13.383 "traddr": "10.0.0.2", 00:22:13.383 "trsvcid": "4420" 00:22:13.383 }, 
00:22:13.383 "peer_address": { 00:22:13.383 "trtype": "TCP", 00:22:13.383 "adrfam": "IPv4", 00:22:13.383 "traddr": "10.0.0.1", 00:22:13.383 "trsvcid": "33540" 00:22:13.383 }, 00:22:13.383 "auth": { 00:22:13.383 "state": "completed", 00:22:13.383 "digest": "sha512", 00:22:13.383 "dhgroup": "ffdhe6144" 00:22:13.383 } 00:22:13.383 } 00:22:13.383 ]' 00:22:13.383 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:13.383 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.383 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:13.383 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:13.383 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.383 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.383 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.383 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.640 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:22:14.570 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.570 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.570 14:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.570 14:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.828 14:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.828 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.828 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.828 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.828 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:15.086 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:15.086 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:15.086 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:15.086 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:15.086 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:15.086 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:15.086 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.086 14:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.086 14:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.086 14:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.086 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.086 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.020 00:22:16.020 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.020 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.020 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.020 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.020 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.020 14:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.020 14:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.020 14:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.020 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.020 { 00:22:16.020 "cntlid": 137, 00:22:16.020 "qid": 0, 00:22:16.020 "state": "enabled", 00:22:16.020 "thread": "nvmf_tgt_poll_group_000", 00:22:16.020 "listen_address": { 00:22:16.020 "trtype": "TCP", 00:22:16.020 "adrfam": "IPv4", 00:22:16.020 "traddr": "10.0.0.2", 00:22:16.020 "trsvcid": "4420" 00:22:16.020 }, 00:22:16.020 "peer_address": { 00:22:16.020 "trtype": "TCP", 00:22:16.020 "adrfam": "IPv4", 00:22:16.020 "traddr": "10.0.0.1", 00:22:16.020 "trsvcid": "42810" 00:22:16.020 }, 00:22:16.020 "auth": { 00:22:16.020 "state": "completed", 00:22:16.020 "digest": "sha512", 00:22:16.020 "dhgroup": "ffdhe8192" 00:22:16.020 } 00:22:16.020 } 00:22:16.020 ]' 00:22:16.020 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.020 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.020 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.278 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:16.278 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.278 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.278 14:24:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.278 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.536 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:22:17.469 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.469 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.469 14:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.469 14:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.469 14:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.469 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:17.469 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.469 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.728 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:17.728 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.728 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:17.728 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:17.728 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:17.728 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.728 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.728 14:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.728 14:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.728 14:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.728 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.728 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.661 00:22:18.661 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.661 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.661 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.919 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.919 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.919 14:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.919 14:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.919 14:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.919 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.919 { 00:22:18.919 "cntlid": 139, 00:22:18.919 "qid": 0, 00:22:18.919 "state": "enabled", 00:22:18.919 "thread": "nvmf_tgt_poll_group_000", 00:22:18.919 "listen_address": { 00:22:18.919 "trtype": "TCP", 00:22:18.919 "adrfam": "IPv4", 00:22:18.919 "traddr": "10.0.0.2", 00:22:18.919 "trsvcid": "4420" 00:22:18.919 }, 00:22:18.919 "peer_address": { 00:22:18.919 "trtype": "TCP", 00:22:18.919 "adrfam": "IPv4", 00:22:18.919 "traddr": "10.0.0.1", 00:22:18.919 "trsvcid": "42836" 00:22:18.919 }, 00:22:18.919 "auth": { 00:22:18.919 "state": "completed", 00:22:18.919 "digest": "sha512", 00:22:18.919 "dhgroup": "ffdhe8192" 00:22:18.919 } 00:22:18.919 } 00:22:18.919 ]' 00:22:18.919 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.919 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.920 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.920 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.920 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.177 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.177 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.177 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.435 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjNiYTA0MmIzNzM1YTIyYmI5YjYzOGRhYWJiNTQ4ZDCTSXJ2: --dhchap-ctrl-secret DHHC-1:02:MWU0NDYyNGIzNjI4YWIxMWI2NzBhNWE3MWVlM2MxZjg1MjBkYTM1NzUyZTE5NWUzcFigkQ==: 00:22:20.368 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.368 14:24:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.368 14:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.368 14:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.368 14:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.368 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.368 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:20.368 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:20.626 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:20.626 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.626 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:20.626 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:20.626 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:20.626 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.626 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.626 14:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.626 14:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.626 14:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.626 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.627 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.560 00:22:21.560 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.560 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.560 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.560 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.560 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.560 14:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.560 14:24:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:21.818 14:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.818 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.818 { 00:22:21.818 "cntlid": 141, 00:22:21.818 "qid": 0, 00:22:21.818 "state": "enabled", 00:22:21.818 "thread": "nvmf_tgt_poll_group_000", 00:22:21.818 "listen_address": { 00:22:21.818 "trtype": "TCP", 00:22:21.818 "adrfam": "IPv4", 00:22:21.818 "traddr": "10.0.0.2", 00:22:21.818 "trsvcid": "4420" 00:22:21.818 }, 00:22:21.818 "peer_address": { 00:22:21.818 "trtype": "TCP", 00:22:21.818 "adrfam": "IPv4", 00:22:21.818 "traddr": "10.0.0.1", 00:22:21.818 "trsvcid": "42874" 00:22:21.818 }, 00:22:21.818 "auth": { 00:22:21.818 "state": "completed", 00:22:21.818 "digest": "sha512", 00:22:21.818 "dhgroup": "ffdhe8192" 00:22:21.818 } 00:22:21.818 } 00:22:21.818 ]' 00:22:21.818 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.818 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.818 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.818 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:21.818 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.818 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.818 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.818 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.076 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWQzN2E0ZjQ3YjZlZWQxNjM4NmMyMjM0N2U5MjNkNGEzZjUxNDEzMzE2MmMwMGE1LClXzg==: --dhchap-ctrl-secret DHHC-1:01:ZmQwNjE2NWY1M2Y0ZjMyMGIyMmNhY2NiZDc4YWE2MDcwRJK2: 00:22:23.009 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.009 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.009 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.009 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.009 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.009 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.009 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:23.009 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:23.267 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:22:23.267 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.267 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:23.267 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:23.267 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:23.267 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.267 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:23.267 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.267 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.267 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.267 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:23.267 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:24.197 00:22:24.197 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:24.197 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:24.197 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.455 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.455 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.455 14:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.455 14:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.455 14:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.455 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:24.455 { 00:22:24.455 "cntlid": 143, 00:22:24.455 "qid": 0, 00:22:24.455 "state": "enabled", 00:22:24.455 "thread": "nvmf_tgt_poll_group_000", 00:22:24.455 "listen_address": { 00:22:24.455 "trtype": "TCP", 00:22:24.455 "adrfam": "IPv4", 00:22:24.455 "traddr": "10.0.0.2", 00:22:24.455 "trsvcid": "4420" 00:22:24.455 }, 00:22:24.455 "peer_address": { 00:22:24.455 "trtype": "TCP", 00:22:24.455 "adrfam": "IPv4", 00:22:24.455 "traddr": "10.0.0.1", 00:22:24.455 "trsvcid": "55460" 00:22:24.455 }, 00:22:24.455 "auth": { 00:22:24.455 "state": "completed", 00:22:24.455 "digest": "sha512", 00:22:24.455 "dhgroup": "ffdhe8192" 00:22:24.455 } 00:22:24.455 } 00:22:24.455 ]' 00:22:24.455 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:24.455 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.455 
14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:24.455 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:24.455 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.455 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.455 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.455 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.713 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:22:25.647 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.905 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.905 14:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.905 14:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.905 14:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.905 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:25.905 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:25.905 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:25.905 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:25.905 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:25.905 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:26.163 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:26.163 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:26.163 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:26.163 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:26.163 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:26.163 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.163 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:26.163 14:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.163 14:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.163 14:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.163 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.163 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.098 00:22:27.098 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.098 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.098 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.356 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.356 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.356 14:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.356 14:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.356 14:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.356 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:27.356 { 00:22:27.356 "cntlid": 145, 00:22:27.356 "qid": 0, 00:22:27.356 "state": "enabled", 00:22:27.356 "thread": "nvmf_tgt_poll_group_000", 00:22:27.356 "listen_address": { 00:22:27.356 "trtype": "TCP", 00:22:27.356 "adrfam": "IPv4", 00:22:27.356 "traddr": "10.0.0.2", 00:22:27.356 "trsvcid": "4420" 00:22:27.356 }, 00:22:27.356 "peer_address": { 00:22:27.356 "trtype": "TCP", 00:22:27.356 "adrfam": "IPv4", 00:22:27.356 "traddr": "10.0.0.1", 00:22:27.356 "trsvcid": "55488" 00:22:27.356 }, 00:22:27.356 "auth": { 00:22:27.356 "state": "completed", 00:22:27.356 "digest": "sha512", 00:22:27.356 "dhgroup": "ffdhe8192" 00:22:27.356 } 00:22:27.356 } 00:22:27.356 ]' 00:22:27.356 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:27.356 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.356 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:27.356 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:27.356 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:27.356 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.356 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.356 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.613 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjA0ZGE0YTM0OWIwZGQ4MmZjZjI2NzhjYzljY2IxMWY5ZGYzZjQ2Yzg2MjFmYjZlFp19Nw==: --dhchap-ctrl-secret DHHC-1:03:MjZlOWY2YTM1OWY0OTFhNmM5NjMwZmZjOTk5NWM4NThlYzQwNzJmOWNhZDY1MDY4MDY0MzhhMjAwYzFkZTU3Mhggswg=: 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:28.547 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:22:29.481 request: 00:22:29.481 { 00:22:29.481 "name": "nvme0", 00:22:29.481 "trtype": "tcp", 00:22:29.481 "traddr": "10.0.0.2", 00:22:29.481 "adrfam": "ipv4", 00:22:29.481 "trsvcid": "4420", 00:22:29.481 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:29.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:29.482 "prchk_reftag": false, 00:22:29.482 "prchk_guard": false, 00:22:29.482 "hdgst": false, 00:22:29.482 "ddgst": false, 00:22:29.482 "dhchap_key": "key2", 00:22:29.482 "method": "bdev_nvme_attach_controller", 00:22:29.482 "req_id": 1 00:22:29.482 } 00:22:29.482 Got JSON-RPC error response 00:22:29.482 response: 00:22:29.482 { 00:22:29.482 "code": -5, 00:22:29.482 "message": "Input/output error" 00:22:29.482 } 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:29.482 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:30.414 request: 00:22:30.414 { 00:22:30.414 "name": "nvme0", 00:22:30.414 "trtype": "tcp", 00:22:30.414 "traddr": "10.0.0.2", 00:22:30.414 "adrfam": "ipv4", 00:22:30.414 "trsvcid": "4420", 00:22:30.414 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:30.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:30.414 "prchk_reftag": false, 00:22:30.414 "prchk_guard": false, 00:22:30.414 "hdgst": false, 00:22:30.414 "ddgst": false, 00:22:30.414 "dhchap_key": "key1", 00:22:30.414 "dhchap_ctrlr_key": "ckey2", 00:22:30.414 "method": "bdev_nvme_attach_controller", 00:22:30.414 "req_id": 1 00:22:30.414 } 00:22:30.414 Got JSON-RPC error response 00:22:30.414 response: 00:22:30.414 { 00:22:30.414 "code": -5, 00:22:30.414 "message": "Input/output error" 00:22:30.414 } 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.414 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.346 request: 00:22:31.347 { 00:22:31.347 "name": "nvme0", 00:22:31.347 "trtype": "tcp", 00:22:31.347 "traddr": "10.0.0.2", 00:22:31.347 "adrfam": "ipv4", 00:22:31.347 "trsvcid": "4420", 00:22:31.347 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:31.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:31.347 "prchk_reftag": false, 00:22:31.347 "prchk_guard": false, 00:22:31.347 "hdgst": false, 00:22:31.347 "ddgst": false, 00:22:31.347 "dhchap_key": "key1", 00:22:31.347 "dhchap_ctrlr_key": "ckey1", 00:22:31.347 "method": "bdev_nvme_attach_controller", 00:22:31.347 "req_id": 1 00:22:31.347 } 00:22:31.347 Got JSON-RPC error response 00:22:31.347 response: 00:22:31.347 { 00:22:31.347 "code": -5, 00:22:31.347 "message": "Input/output error" 00:22:31.347 } 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1390432 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1390432 ']' 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1390432 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1390432 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1390432' 00:22:31.347 killing process with pid 1390432 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1390432 00:22:31.347 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1390432 00:22:32.719 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:32.719 14:24:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.719 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:32.719 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.719 14:24:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1413159 00:22:32.719 14:24:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:32.719 14:24:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1413159 00:22:32.719 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1413159 ']' 00:22:32.719 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.719 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:32.719 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.719 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:32.719 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.652 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.652 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:33.652 14:24:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.652 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:33.652 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.652 14:24:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.652 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:33.652 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1413159 00:22:33.652 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1413159 ']' 00:22:33.652 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.652 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.652 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
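For reference, each connect_authenticate round in the trace above reduces to the sequence sketched below. This is a condensed, illustrative recap assembled only from commands visible in this run, not additional test output: addresses, NQNs and socket paths are the ones used here, while <hostnqn>, key1/ckey1 and the DHHC-1 secrets stand for the host NQN, key index and secrets of whichever round is being exercised; rpc.py is SPDK's scripts/rpc.py, and target-side calls go through the test's rpc_cmd wrapper against the default /var/tmp/spdk.sock shown above.

# host side: pin the digest and DH group for this round
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# target side: allow <hostnqn> on the subsystem with the key pair under test
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key1 --dhchap-ctrlr-key ckey1

# attach via the SPDK initiator and check that the qpair finished DH-HMAC-CHAP
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# repeat the handshake through the kernel initiator, then tear the round down
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <hostnqn> --hostid <host uuid> \
    --dhchap-secret <DHHC-1 host secret> --dhchap-ctrl-secret <DHHC-1 ctrl secret>
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>

The negative cases later in the trace (attaching with a key the subsystem was not configured for) are expected to fail this same attach step with the JSON-RPC error code -5 "Input/output error" shown in the request/response dumps.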
00:22:33.652 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.652 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.909 14:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.909 14:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:33.909 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:33.909 14:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.909 14:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.166 14:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.166 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:34.166 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:34.166 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:34.166 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:34.166 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:34.166 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.166 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:34.166 14:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.166 14:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.166 14:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.166 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.166 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:35.098 00:22:35.098 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:35.098 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:35.098 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.356 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.356 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.356 14:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.356 14:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.356 14:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.356 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:35.356 { 00:22:35.356 
"cntlid": 1, 00:22:35.356 "qid": 0, 00:22:35.356 "state": "enabled", 00:22:35.356 "thread": "nvmf_tgt_poll_group_000", 00:22:35.356 "listen_address": { 00:22:35.356 "trtype": "TCP", 00:22:35.356 "adrfam": "IPv4", 00:22:35.356 "traddr": "10.0.0.2", 00:22:35.356 "trsvcid": "4420" 00:22:35.356 }, 00:22:35.356 "peer_address": { 00:22:35.356 "trtype": "TCP", 00:22:35.356 "adrfam": "IPv4", 00:22:35.356 "traddr": "10.0.0.1", 00:22:35.356 "trsvcid": "60350" 00:22:35.356 }, 00:22:35.356 "auth": { 00:22:35.356 "state": "completed", 00:22:35.356 "digest": "sha512", 00:22:35.356 "dhgroup": "ffdhe8192" 00:22:35.356 } 00:22:35.356 } 00:22:35.356 ]' 00:22:35.356 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:35.615 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.615 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:35.615 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:35.615 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:35.615 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.615 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.615 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.873 14:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTE0ZDExMDQzZTMzODVlMGEyNDZhYmZlNTcyMjYyZTZlMzcxODkxMjJkMmFhMWM1NWEyNjkwOGM1M2VjNmNiOZZjMxE=: 00:22:36.807 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.807 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.807 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.807 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.807 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.807 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:36.807 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.807 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.807 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.807 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:36.807 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:37.065 14:24:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.065 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:37.065 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.065 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:37.065 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.065 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:37.065 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.065 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.065 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.323 request: 00:22:37.323 { 00:22:37.323 "name": "nvme0", 00:22:37.323 "trtype": "tcp", 00:22:37.323 "traddr": "10.0.0.2", 00:22:37.323 "adrfam": "ipv4", 00:22:37.323 "trsvcid": "4420", 00:22:37.323 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:37.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:37.323 "prchk_reftag": false, 00:22:37.323 "prchk_guard": false, 00:22:37.323 "hdgst": false, 00:22:37.323 "ddgst": false, 00:22:37.323 "dhchap_key": "key3", 00:22:37.323 "method": "bdev_nvme_attach_controller", 00:22:37.323 "req_id": 1 00:22:37.323 } 00:22:37.323 Got JSON-RPC error response 00:22:37.323 response: 00:22:37.323 { 00:22:37.323 "code": -5, 00:22:37.323 "message": "Input/output error" 00:22:37.323 } 00:22:37.323 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:37.323 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:37.323 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:37.323 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:37.323 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:37.323 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:37.323 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:37.323 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:37.581 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.581 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:37.581 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.581 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:37.581 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.581 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:37.581 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.581 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.581 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.839 request: 00:22:37.839 { 00:22:37.839 "name": "nvme0", 00:22:37.839 "trtype": "tcp", 00:22:37.839 "traddr": "10.0.0.2", 00:22:37.839 "adrfam": "ipv4", 00:22:37.839 "trsvcid": "4420", 00:22:37.839 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:37.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:37.839 "prchk_reftag": false, 00:22:37.839 "prchk_guard": false, 00:22:37.839 "hdgst": false, 00:22:37.839 "ddgst": false, 00:22:37.839 "dhchap_key": "key3", 00:22:37.839 "method": "bdev_nvme_attach_controller", 00:22:37.839 "req_id": 1 00:22:37.840 } 00:22:37.840 Got JSON-RPC error response 00:22:37.840 response: 00:22:37.840 { 00:22:37.840 "code": -5, 00:22:37.840 "message": "Input/output error" 00:22:37.840 } 00:22:37.840 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:37.840 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:37.840 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:37.840 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:37.840 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:37.840 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:37.840 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:37.840 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:37.840 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:37.840 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:38.098 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:38.356 request: 00:22:38.356 { 00:22:38.356 "name": "nvme0", 00:22:38.356 "trtype": "tcp", 00:22:38.356 "traddr": "10.0.0.2", 00:22:38.356 "adrfam": "ipv4", 00:22:38.356 "trsvcid": "4420", 00:22:38.356 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:38.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:38.356 "prchk_reftag": false, 00:22:38.356 "prchk_guard": false, 00:22:38.356 "hdgst": false, 00:22:38.356 "ddgst": false, 00:22:38.356 
"dhchap_key": "key0", 00:22:38.356 "dhchap_ctrlr_key": "key1", 00:22:38.356 "method": "bdev_nvme_attach_controller", 00:22:38.356 "req_id": 1 00:22:38.356 } 00:22:38.356 Got JSON-RPC error response 00:22:38.356 response: 00:22:38.356 { 00:22:38.356 "code": -5, 00:22:38.356 "message": "Input/output error" 00:22:38.356 } 00:22:38.356 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:38.356 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.356 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.356 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.356 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:38.356 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:38.614 00:22:38.614 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:38.614 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:38.614 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.872 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.872 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.872 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.130 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:39.130 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:39.130 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1390583 00:22:39.130 14:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1390583 ']' 00:22:39.130 14:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1390583 00:22:39.130 14:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:39.130 14:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.130 14:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1390583 00:22:39.130 14:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:39.130 14:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:39.130 14:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1390583' 00:22:39.130 killing process with pid 1390583 00:22:39.130 14:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1390583 00:22:39.130 14:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1390583 
00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.683 rmmod nvme_tcp 00:22:41.683 rmmod nvme_fabrics 00:22:41.683 rmmod nvme_keyring 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1413159 ']' 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1413159 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1413159 ']' 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1413159 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1413159 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1413159' 00:22:41.683 killing process with pid 1413159 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1413159 00:22:41.683 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1413159 00:22:43.056 14:24:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:43.056 14:24:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:43.056 14:24:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:43.056 14:24:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.056 14:24:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:43.056 14:24:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.056 14:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.056 14:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.959 14:24:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:44.959 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Hgo /tmp/spdk.key-sha256.hWz /tmp/spdk.key-sha384.HRx /tmp/spdk.key-sha512.XPh /tmp/spdk.key-sha512.1pb /tmp/spdk.key-sha384.JON /tmp/spdk.key-sha256.SH1 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:44.959 00:22:44.959 real 3m14.976s 00:22:44.959 user 7m29.020s 00:22:44.959 sys 0m24.944s 00:22:44.959 14:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:44.959 14:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.959 ************************************ 00:22:44.959 END TEST nvmf_auth_target 00:22:44.959 ************************************ 00:22:44.959 14:24:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:44.959 14:24:54 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:44.959 14:24:54 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:44.959 14:24:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:44.959 14:24:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.959 14:24:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.959 ************************************ 00:22:44.959 START TEST nvmf_bdevio_no_huge 00:22:44.959 ************************************ 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:44.959 * Looking for test storage... 00:22:44.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
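The NVMF_APP argument array assembled here (the NO_HUGE arguments are appended on the next trace line) is what later produces the hugepage-free target start visible further down in this log. A sketch of the resulting invocation, reconstructed from the command line printed below; the netns name, binary path, and flags are taken from the log, while the backgrounding and pid capture are an assumed simplification:

    # Start the target without hugepages: --no-huge with a 1024 MB memory cap.
    # Core mask 0x78 = 0111_1000b selects cores 3-6, matching the
    # "Reactor started on core 3/4/5/6" notices that follow in the log.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!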
00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:44.959 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.960 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.960 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.960 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:44.960 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:44.960 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:44.960 14:24:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:46.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:46.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.858 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:46.859 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:46.859 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:46.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:22:46.859 00:22:46.859 --- 10.0.0.2 ping statistics --- 00:22:46.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.859 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:22:46.859 00:22:46.859 --- 10.0.0.1 ping statistics --- 00:22:46.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.859 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1416339 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1416339 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1416339 ']' 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:46.859 14:24:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:47.117 [2024-07-10 14:24:56.422828] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:22:47.117 [2024-07-10 14:24:56.422993] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:47.375 [2024-07-10 14:24:56.601304] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:47.633 [2024-07-10 14:24:56.882667] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:47.633 [2024-07-10 14:24:56.882736] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.633 [2024-07-10 14:24:56.882765] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.633 [2024-07-10 14:24:56.882787] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.633 [2024-07-10 14:24:56.882808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.633 [2024-07-10 14:24:56.882939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:47.634 [2024-07-10 14:24:56.883062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:47.634 [2024-07-10 14:24:56.883142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:47.634 [2024-07-10 14:24:56.883171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:47.892 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.892 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:47.892 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:47.892 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:47.892 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:47.892 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.892 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:47.892 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.892 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:47.892 [2024-07-10 14:24:57.332292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.892 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.892 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:47.892 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.892 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:48.150 Malloc0 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.150 14:24:57 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:48.150 [2024-07-10 14:24:57.423254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.150 { 00:22:48.150 "params": { 00:22:48.150 "name": "Nvme$subsystem", 00:22:48.150 "trtype": "$TEST_TRANSPORT", 00:22:48.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.150 "adrfam": "ipv4", 00:22:48.150 "trsvcid": "$NVMF_PORT", 00:22:48.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.150 "hdgst": ${hdgst:-false}, 00:22:48.150 "ddgst": ${ddgst:-false} 00:22:48.150 }, 00:22:48.150 "method": "bdev_nvme_attach_controller" 00:22:48.150 } 00:22:48.150 EOF 00:22:48.150 )") 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:48.150 14:24:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:48.150 "params": { 00:22:48.150 "name": "Nvme1", 00:22:48.150 "trtype": "tcp", 00:22:48.150 "traddr": "10.0.0.2", 00:22:48.150 "adrfam": "ipv4", 00:22:48.150 "trsvcid": "4420", 00:22:48.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.150 "hdgst": false, 00:22:48.150 "ddgst": false 00:22:48.150 }, 00:22:48.150 "method": "bdev_nvme_attach_controller" 00:22:48.150 }' 00:22:48.150 [2024-07-10 14:24:57.506027] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:22:48.150 [2024-07-10 14:24:57.506168] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1416490 ] 00:22:48.408 [2024-07-10 14:24:57.650611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:48.666 [2024-07-10 14:24:57.906184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.666 [2024-07-10 14:24:57.906229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.666 [2024-07-10 14:24:57.906234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.232 I/O targets: 00:22:49.232 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:49.232 00:22:49.232 00:22:49.232 CUnit - A unit testing framework for C - Version 2.1-3 00:22:49.232 http://cunit.sourceforge.net/ 00:22:49.232 00:22:49.232 00:22:49.232 Suite: bdevio tests on: Nvme1n1 00:22:49.232 Test: blockdev write read block ...passed 00:22:49.232 Test: blockdev write zeroes read block ...passed 00:22:49.232 Test: blockdev write zeroes read no split ...passed 00:22:49.232 Test: blockdev write zeroes read split ...passed 00:22:49.232 Test: blockdev write zeroes read split partial ...passed 00:22:49.233 Test: blockdev reset ...[2024-07-10 14:24:58.618594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:49.233 [2024-07-10 14:24:58.618776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:22:49.233 [2024-07-10 14:24:58.637672] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:49.233 passed 00:22:49.233 Test: blockdev write read 8 blocks ...passed 00:22:49.233 Test: blockdev write read size > 128k ...passed 00:22:49.233 Test: blockdev write read invalid size ...passed 00:22:49.233 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:49.233 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:49.233 Test: blockdev write read max offset ...passed 00:22:49.491 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:49.491 Test: blockdev writev readv 8 blocks ...passed 00:22:49.491 Test: blockdev writev readv 30 x 1block ...passed 00:22:49.491 Test: blockdev writev readv block ...passed 00:22:49.491 Test: blockdev writev readv size > 128k ...passed 00:22:49.491 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:49.491 Test: blockdev comparev and writev ...[2024-07-10 14:24:58.816492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:49.491 [2024-07-10 14:24:58.816577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.491 [2024-07-10 14:24:58.816619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:49.491 [2024-07-10 14:24:58.816646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.491 [2024-07-10 14:24:58.817140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:49.491 [2024-07-10 14:24:58.817176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:49.491 [2024-07-10 14:24:58.817214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:49.491 [2024-07-10 14:24:58.817242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:49.491 [2024-07-10 14:24:58.817740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:49.491 [2024-07-10 14:24:58.817774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:49.491 [2024-07-10 14:24:58.817807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:49.491 [2024-07-10 14:24:58.817837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:49.491 [2024-07-10 14:24:58.818333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:49.491 [2024-07-10 14:24:58.818366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:49.491 [2024-07-10 14:24:58.818398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:49.491 [2024-07-10 14:24:58.818423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:49.491 passed 00:22:49.491 Test: blockdev nvme passthru rw ...passed 00:22:49.491 Test: blockdev nvme passthru vendor specific ...[2024-07-10 14:24:58.900945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:49.491 [2024-07-10 14:24:58.901003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:49.491 [2024-07-10 14:24:58.901303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:49.491 [2024-07-10 14:24:58.901335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:49.491 [2024-07-10 14:24:58.901602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:49.491 [2024-07-10 14:24:58.901635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:49.492 [2024-07-10 14:24:58.901925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:49.492 [2024-07-10 14:24:58.901967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:49.492 passed 00:22:49.492 Test: blockdev nvme admin passthru ...passed 00:22:49.492 Test: blockdev copy ...passed 00:22:49.492 00:22:49.492 Run Summary: Type Total Ran Passed Failed Inactive 00:22:49.492 suites 1 1 n/a 0 0 00:22:49.492 tests 23 23 23 0 0 00:22:49.492 asserts 152 152 152 0 n/a 00:22:49.492 00:22:49.492 Elapsed time = 1.097 seconds 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.425 rmmod nvme_tcp 00:22:50.425 rmmod nvme_fabrics 00:22:50.425 rmmod nvme_keyring 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1416339 ']' 00:22:50.425 14:24:59 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1416339 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1416339 ']' 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1416339 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1416339 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1416339' 00:22:50.425 killing process with pid 1416339 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1416339 00:22:50.425 14:24:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1416339 00:22:51.360 14:25:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:51.360 14:25:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:51.360 14:25:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:51.360 14:25:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.360 14:25:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:51.360 14:25:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.360 14:25:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.360 14:25:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.263 14:25:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.263 00:22:53.263 real 0m8.460s 00:22:53.263 user 0m18.929s 00:22:53.263 sys 0m2.790s 00:22:53.263 14:25:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:53.263 14:25:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.263 ************************************ 00:22:53.263 END TEST nvmf_bdevio_no_huge 00:22:53.263 ************************************ 00:22:53.263 14:25:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:53.263 14:25:02 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:53.263 14:25:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:53.263 14:25:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.263 14:25:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.263 ************************************ 00:22:53.263 START TEST nvmf_tls 00:22:53.263 ************************************ 00:22:53.263 14:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:53.521 * Looking for test storage... 
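[editor's note] The teardown just above (nvmftestfini at the end of nvmf_bdevio_no_huge) unloads the kernel NVMe/TCP initiator modules, stops the nvmf_tgt process started for the bdevio test, and flushes the test interfaces before the nvmf_tls suite re-creates the environment. A minimal sketch of the equivalent commands, reconstructed from the trace above; the PID, namespace, and interface names are the ones from this run, and treating _remove_spdk_ns as deleting the cvl_0_0_ns_spdk namespace is an assumption:

  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 1416339                      # nvmf_tgt launched for nvmf_bdevio_no_huge; the script then waits on it
  ip netns delete cvl_0_0_ns_spdk   # assumption: what _remove_spdk_ns amounts to in this run
  ip -4 addr flush cvl_0_1
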
00:22:53.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.521 14:25:02 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.522 14:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:55.416 
14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.416 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:55.417 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:55.417 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:55.417 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:55.417 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:55.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:22:55.417 00:22:55.417 --- 10.0.0.2 ping statistics --- 00:22:55.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.417 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:55.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:22:55.417 00:22:55.417 --- 10.0.0.1 ping statistics --- 00:22:55.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.417 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1418704 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1418704 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1418704 ']' 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.417 14:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.675 [2024-07-10 14:25:04.980162] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:22:55.675 [2024-07-10 14:25:04.980322] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.675 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.675 [2024-07-10 14:25:05.123249] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.933 [2024-07-10 14:25:05.376019] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.933 [2024-07-10 14:25:05.376099] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:55.933 [2024-07-10 14:25:05.376137] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.933 [2024-07-10 14:25:05.376162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.933 [2024-07-10 14:25:05.376206] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.933 [2024-07-10 14:25:05.376273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.499 14:25:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.499 14:25:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:56.499 14:25:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:56.499 14:25:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:56.499 14:25:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.499 14:25:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.499 14:25:05 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:56.499 14:25:05 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:56.756 true 00:22:56.756 14:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:56.756 14:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:57.014 14:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:57.014 14:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:57.014 14:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:57.271 14:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:57.271 14:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:57.530 14:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:57.530 14:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:57.530 14:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:57.788 14:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:57.788 14:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:58.045 14:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:58.045 14:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:58.045 14:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:58.045 14:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:58.303 14:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:58.303 14:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:58.303 14:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:58.560 14:25:07 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:58.560 14:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:58.818 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:58.818 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:58.818 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:59.076 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:59.076 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:59.333 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:59.333 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.bDdsj4xPUp 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.ntmXNNvI4R 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.bDdsj4xPUp 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ntmXNNvI4R 00:22:59.334 14:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:59.591 14:25:09 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:00.157 14:25:09 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.bDdsj4xPUp 00:23:00.157 14:25:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.bDdsj4xPUp 00:23:00.157 14:25:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:00.722 [2024-07-10 14:25:09.898718] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.722 14:25:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:00.722 14:25:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:00.981 [2024-07-10 14:25:10.400372] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:00.981 [2024-07-10 14:25:10.400742] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.981 14:25:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:01.547 malloc0 00:23:01.547 14:25:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:01.804 14:25:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bDdsj4xPUp 00:23:01.804 [2024-07-10 14:25:11.279512] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:02.062 14:25:11 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.bDdsj4xPUp 00:23:02.062 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.028 Initializing NVMe Controllers 00:23:12.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:12.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:12.028 Initialization complete. Launching workers. 
00:23:12.028 ======================================================== 00:23:12.028 Latency(us) 00:23:12.028 Device Information : IOPS MiB/s Average min max 00:23:12.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5482.29 21.42 11679.31 2119.79 12946.89 00:23:12.028 ======================================================== 00:23:12.028 Total : 5482.29 21.42 11679.31 2119.79 12946.89 00:23:12.028 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bDdsj4xPUp 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bDdsj4xPUp' 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1420719 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1420719 /var/tmp/bdevperf.sock 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1420719 ']' 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.284 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.284 [2024-07-10 14:25:21.609108] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
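[editor's note] The bdevperf cases launched from here on connect to the TLS listener configured earlier in this trace. For reference, a consolidated sketch of that setup using the key material, addresses, and NQNs shown above; rpc.py stands for the scripts/rpc.py invocations in the trace, the key paths are the mktemp names from this run, and the redirection into the key files is inferred from the chmod/--psk usage that follows:

  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/tmp.bDdsj4xPUp
  echo -n 'NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:' > /tmp/tmp.ntmXNNvI4R
  chmod 0600 /tmp/tmp.bDdsj4xPUp /tmp/tmp.ntmXNNvI4R
  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bDdsj4xPUp
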
00:23:12.284 [2024-07-10 14:25:21.609263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420719 ] 00:23:12.284 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.284 [2024-07-10 14:25:21.730401] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.540 [2024-07-10 14:25:21.958854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.104 14:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.104 14:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:13.104 14:25:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bDdsj4xPUp 00:23:13.361 [2024-07-10 14:25:22.807640] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.361 [2024-07-10 14:25:22.807861] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:13.618 TLSTESTn1 00:23:13.618 14:25:22 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:13.618 Running I/O for 10 seconds... 00:23:23.661 00:23:23.661 Latency(us) 00:23:23.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.661 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:23.661 Verification LBA range: start 0x0 length 0x2000 00:23:23.661 TLSTESTn1 : 10.05 2617.29 10.22 0.00 0.00 48765.17 12913.02 68351.62 00:23:23.661 =================================================================================================================== 00:23:23.661 Total : 2617.29 10.22 0.00 0.00 48765.17 12913.02 68351.62 00:23:23.661 0 00:23:23.661 14:25:33 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:23.661 14:25:33 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1420719 00:23:23.661 14:25:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1420719 ']' 00:23:23.661 14:25:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1420719 00:23:23.661 14:25:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:23.661 14:25:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:23.661 14:25:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1420719 00:23:23.918 14:25:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:23.918 14:25:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:23.918 14:25:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1420719' 00:23:23.918 killing process with pid 1420719 00:23:23.918 14:25:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1420719 00:23:23.918 Received shutdown signal, test time was about 10.000000 seconds 00:23:23.918 00:23:23.918 Latency(us) 00:23:23.918 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:23:23.918 =================================================================================================================== 00:23:23.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.918 [2024-07-10 14:25:33.147505] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:23.918 14:25:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1420719 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ntmXNNvI4R 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ntmXNNvI4R 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ntmXNNvI4R 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ntmXNNvI4R' 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1422176 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1422176 /var/tmp/bdevperf.sock 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1422176 ']' 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.850 14:25:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.850 [2024-07-10 14:25:34.210595] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:23:24.850 [2024-07-10 14:25:34.210805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422176 ] 00:23:24.850 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.108 [2024-07-10 14:25:34.335330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.108 [2024-07-10 14:25:34.556628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.039 14:25:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.039 14:25:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:26.039 14:25:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ntmXNNvI4R 00:23:26.039 [2024-07-10 14:25:35.417175] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.039 [2024-07-10 14:25:35.417377] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:26.039 [2024-07-10 14:25:35.427610] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:26.039 [2024-07-10 14:25:35.428273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:26.039 [2024-07-10 14:25:35.429234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:26.039 [2024-07-10 14:25:35.430227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:26.039 [2024-07-10 14:25:35.430264] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:26.039 [2024-07-10 14:25:35.430304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
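[editor's note] This is the first negative case (target/tls.sh@146): the attach uses the second key, /tmp/tmp.ntmXNNvI4R, which was never registered for host1 on the target, so the TLS connection fails, the controller ends up in the failed state logged above, and rpc.py prints the failing request and error response that follow. A sketch of that call as traced, with the expected outcome noted:

  # Expected to fail: host1 was registered with /tmp/tmp.bDdsj4xPUp, not this key.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.ntmXNNvI4R
  # => JSON-RPC error -5 (Input/output error), as shown in the response below.
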
00:23:26.039 request: 00:23:26.039 { 00:23:26.039 "name": "TLSTEST", 00:23:26.039 "trtype": "tcp", 00:23:26.039 "traddr": "10.0.0.2", 00:23:26.039 "adrfam": "ipv4", 00:23:26.039 "trsvcid": "4420", 00:23:26.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.039 "prchk_reftag": false, 00:23:26.039 "prchk_guard": false, 00:23:26.039 "hdgst": false, 00:23:26.039 "ddgst": false, 00:23:26.039 "psk": "/tmp/tmp.ntmXNNvI4R", 00:23:26.039 "method": "bdev_nvme_attach_controller", 00:23:26.039 "req_id": 1 00:23:26.039 } 00:23:26.039 Got JSON-RPC error response 00:23:26.039 response: 00:23:26.039 { 00:23:26.039 "code": -5, 00:23:26.039 "message": "Input/output error" 00:23:26.039 } 00:23:26.039 14:25:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1422176 00:23:26.039 14:25:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1422176 ']' 00:23:26.039 14:25:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1422176 00:23:26.039 14:25:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:26.039 14:25:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.039 14:25:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1422176 00:23:26.039 14:25:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:26.039 14:25:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:26.039 14:25:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1422176' 00:23:26.039 killing process with pid 1422176 00:23:26.039 14:25:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1422176 00:23:26.039 Received shutdown signal, test time was about 10.000000 seconds 00:23:26.039 00:23:26.039 Latency(us) 00:23:26.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.039 =================================================================================================================== 00:23:26.039 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:26.039 14:25:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1422176 00:23:26.039 [2024-07-10 14:25:35.474219] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bDdsj4xPUp 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bDdsj4xPUp 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bDdsj4xPUp 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bDdsj4xPUp' 00:23:26.970 14:25:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.971 14:25:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1422446 00:23:26.971 14:25:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:26.971 14:25:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:26.971 14:25:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1422446 /var/tmp/bdevperf.sock 00:23:26.971 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1422446 ']' 00:23:26.971 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.971 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.971 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.971 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.971 14:25:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.228 [2024-07-10 14:25:36.481442] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:23:27.228 [2024-07-10 14:25:36.481617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422446 ] 00:23:27.228 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.228 [2024-07-10 14:25:36.604558] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.485 [2024-07-10 14:25:36.829608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.048 14:25:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.048 14:25:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:28.048 14:25:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.bDdsj4xPUp 00:23:28.305 [2024-07-10 14:25:37.698366] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.305 [2024-07-10 14:25:37.698601] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:28.305 [2024-07-10 14:25:37.709949] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:28.305 [2024-07-10 14:25:37.710010] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:28.305 [2024-07-10 14:25:37.710092] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:28.305 [2024-07-10 14:25:37.711030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:28.305 [2024-07-10 14:25:37.711989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:28.305 [2024-07-10 14:25:37.712989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:28.305 [2024-07-10 14:25:37.713018] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:28.305 [2024-07-10 14:25:37.713050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
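[editor's note] The second negative case uses the valid key but hostnqn nqn.2016-06.io.spdk:host2, which was never added to cnode1, so the target's PSK lookup for that identity fails (the "Could not find PSK for identity" errors above) and the attach returns the same Input/output error, dumped next. For this attach to succeed, host2 would first have to be registered on the target the same way host1 was; a hypothetical fix mirroring the add_host call seen earlier in the trace, which the test intentionally does not perform:

  # Hypothetical: register host2 with the same PSK (the test deliberately skips this).
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.bDdsj4xPUp
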
00:23:28.305 request: 00:23:28.305 { 00:23:28.305 "name": "TLSTEST", 00:23:28.305 "trtype": "tcp", 00:23:28.305 "traddr": "10.0.0.2", 00:23:28.305 "adrfam": "ipv4", 00:23:28.305 "trsvcid": "4420", 00:23:28.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.305 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:28.305 "prchk_reftag": false, 00:23:28.305 "prchk_guard": false, 00:23:28.305 "hdgst": false, 00:23:28.305 "ddgst": false, 00:23:28.305 "psk": "/tmp/tmp.bDdsj4xPUp", 00:23:28.305 "method": "bdev_nvme_attach_controller", 00:23:28.305 "req_id": 1 00:23:28.305 } 00:23:28.305 Got JSON-RPC error response 00:23:28.305 response: 00:23:28.305 { 00:23:28.305 "code": -5, 00:23:28.305 "message": "Input/output error" 00:23:28.305 } 00:23:28.305 14:25:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1422446 00:23:28.305 14:25:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1422446 ']' 00:23:28.305 14:25:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1422446 00:23:28.305 14:25:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:28.305 14:25:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:28.305 14:25:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1422446 00:23:28.305 14:25:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:28.305 14:25:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:28.305 14:25:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1422446' 00:23:28.305 killing process with pid 1422446 00:23:28.305 14:25:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1422446 00:23:28.305 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.305 00:23:28.305 Latency(us) 00:23:28.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.305 =================================================================================================================== 00:23:28.305 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:28.305 [2024-07-10 14:25:37.761134] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:28.305 14:25:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1422446 00:23:29.235 14:25:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:29.235 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:29.235 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:29.235 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:29.235 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:29.235 14:25:38 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bDdsj4xPUp 00:23:29.235 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bDdsj4xPUp 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bDdsj4xPUp 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bDdsj4xPUp' 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1422718 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1422718 /var/tmp/bdevperf.sock 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1422718 ']' 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.493 14:25:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.493 [2024-07-10 14:25:38.799881] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:23:29.493 [2024-07-10 14:25:38.800045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422718 ] 00:23:29.493 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.493 [2024-07-10 14:25:38.922596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.751 [2024-07-10 14:25:39.154157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.317 14:25:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.317 14:25:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:30.317 14:25:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bDdsj4xPUp 00:23:30.575 [2024-07-10 14:25:40.019583] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.575 [2024-07-10 14:25:40.019818] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:30.575 [2024-07-10 14:25:40.030034] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:30.575 [2024-07-10 14:25:40.030076] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:30.575 [2024-07-10 14:25:40.030138] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:30.575 [2024-07-10 14:25:40.031067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:30.575 [2024-07-10 14:25:40.032044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:30.575 [2024-07-10 14:25:40.033037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:30.575 [2024-07-10 14:25:40.033070] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:30.575 [2024-07-10 14:25:40.033109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:30.575 request: 00:23:30.575 { 00:23:30.575 "name": "TLSTEST", 00:23:30.575 "trtype": "tcp", 00:23:30.575 "traddr": "10.0.0.2", 00:23:30.575 "adrfam": "ipv4", 00:23:30.575 "trsvcid": "4420", 00:23:30.575 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:30.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.575 "prchk_reftag": false, 00:23:30.575 "prchk_guard": false, 00:23:30.575 "hdgst": false, 00:23:30.575 "ddgst": false, 00:23:30.575 "psk": "/tmp/tmp.bDdsj4xPUp", 00:23:30.575 "method": "bdev_nvme_attach_controller", 00:23:30.575 "req_id": 1 00:23:30.575 } 00:23:30.575 Got JSON-RPC error response 00:23:30.575 response: 00:23:30.575 { 00:23:30.575 "code": -5, 00:23:30.575 "message": "Input/output error" 00:23:30.575 } 00:23:30.575 14:25:40 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1422718 00:23:30.575 14:25:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1422718 ']' 00:23:30.575 14:25:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1422718 00:23:30.833 14:25:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:30.833 14:25:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:30.833 14:25:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1422718 00:23:30.833 14:25:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:30.833 14:25:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:30.833 14:25:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1422718' 00:23:30.833 killing process with pid 1422718 00:23:30.833 14:25:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1422718 00:23:30.833 Received shutdown signal, test time was about 10.000000 seconds 00:23:30.833 00:23:30.833 Latency(us) 00:23:30.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.833 =================================================================================================================== 00:23:30.833 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:30.833 [2024-07-10 14:25:40.085813] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:30.833 14:25:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1422718 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1422995 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1422995 /var/tmp/bdevperf.sock 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1422995 ']' 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:31.767 14:25:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.767 [2024-07-10 14:25:41.113866] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:23:31.767 [2024-07-10 14:25:41.114027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422995 ] 00:23:31.767 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.767 [2024-07-10 14:25:41.239472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.025 [2024-07-10 14:25:41.469461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.957 14:25:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:32.957 14:25:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:32.957 14:25:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:32.957 [2024-07-10 14:25:42.332105] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:32.957 [2024-07-10 14:25:42.334225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:23:32.957 [2024-07-10 14:25:42.335210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:32.957 [2024-07-10 14:25:42.335246] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:32.957 [2024-07-10 14:25:42.335270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:32.957 request: 00:23:32.957 { 00:23:32.957 "name": "TLSTEST", 00:23:32.957 "trtype": "tcp", 00:23:32.957 "traddr": "10.0.0.2", 00:23:32.957 "adrfam": "ipv4", 00:23:32.957 "trsvcid": "4420", 00:23:32.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.957 "prchk_reftag": false, 00:23:32.957 "prchk_guard": false, 00:23:32.957 "hdgst": false, 00:23:32.957 "ddgst": false, 00:23:32.957 "method": "bdev_nvme_attach_controller", 00:23:32.957 "req_id": 1 00:23:32.957 } 00:23:32.957 Got JSON-RPC error response 00:23:32.957 response: 00:23:32.957 { 00:23:32.957 "code": -5, 00:23:32.957 "message": "Input/output error" 00:23:32.957 } 00:23:32.957 14:25:42 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1422995 00:23:32.957 14:25:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1422995 ']' 00:23:32.957 14:25:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1422995 00:23:32.957 14:25:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:32.957 14:25:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:32.957 14:25:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1422995 00:23:32.957 14:25:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:32.957 14:25:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:32.957 14:25:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1422995' 00:23:32.957 killing process with pid 1422995 00:23:32.958 14:25:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1422995 00:23:32.958 Received shutdown signal, test time was about 10.000000 seconds 00:23:32.958 00:23:32.958 Latency(us) 00:23:32.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.958 =================================================================================================================== 00:23:32.958 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:32.958 14:25:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1422995 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1418704 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1418704 ']' 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1418704 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1418704 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1418704' 00:23:33.890 
killing process with pid 1418704 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1418704 00:23:33.890 [2024-07-10 14:25:43.354384] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:33.890 14:25:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1418704 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.wNKYQGCZLl 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.wNKYQGCZLl 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1423417 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1423417 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1423417 ']' 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.786 14:25:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.786 [2024-07-10 14:25:44.939251] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
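
[editor's note] The format_interchange_psk / format_key trace above pipes the configured key through an inline `python -` whose body is not captured in the log. A minimal sketch of what that step appears to compute, judging from the printed key_long value (this is an assumption, not the test suite's own helper): base64 of the key's ASCII bytes followed by a little-endian CRC-32, wrapped as `NVMeTLSkey-1:<digest>:...:`.

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'EOF'
import base64, sys, zlib
data = sys.argv[1].encode()                    # the configured key, taken as ASCII bytes
crc = zlib.crc32(data).to_bytes(4, "little")   # 4-byte CRC-32 appended after the key
# "02" matches the digest argument seen in the trace above
print("NVMeTLSkey-1:02:" + base64.b64encode(data + crc).decode() + ":")
EOF

Assuming that layout, the script should print the same NVMeTLSkey-1:02:MDAx...== string that the log shows being written to /tmp/tmp.wNKYQGCZLl.
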
00:23:35.786 [2024-07-10 14:25:44.939386] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.786 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.786 [2024-07-10 14:25:45.070632] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.044 [2024-07-10 14:25:45.319457] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.044 [2024-07-10 14:25:45.319528] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.044 [2024-07-10 14:25:45.319556] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.044 [2024-07-10 14:25:45.319580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.044 [2024-07-10 14:25:45.319602] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:36.044 [2024-07-10 14:25:45.319654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.609 14:25:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.609 14:25:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:36.609 14:25:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:36.609 14:25:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:36.609 14:25:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.609 14:25:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.609 14:25:45 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.wNKYQGCZLl 00:23:36.609 14:25:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wNKYQGCZLl 00:23:36.609 14:25:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:36.866 [2024-07-10 14:25:46.150404] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.866 14:25:46 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:37.123 14:25:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:37.379 [2024-07-10 14:25:46.691914] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.379 [2024-07-10 14:25:46.692194] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.379 14:25:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:37.637 malloc0 00:23:37.637 14:25:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:37.894 14:25:47 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.wNKYQGCZLl 00:23:38.151 [2024-07-10 14:25:47.574069] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wNKYQGCZLl 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wNKYQGCZLl' 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1423825 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1423825 /var/tmp/bdevperf.sock 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1423825 ']' 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:38.151 14:25:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.409 [2024-07-10 14:25:47.676280] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:23:38.409 [2024-07-10 14:25:47.676450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1423825 ] 00:23:38.409 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.409 [2024-07-10 14:25:47.800141] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.667 [2024-07-10 14:25:48.021144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.232 14:25:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.232 14:25:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:39.232 14:25:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wNKYQGCZLl 00:23:39.489 [2024-07-10 14:25:48.872348] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.489 [2024-07-10 14:25:48.872580] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:39.489 TLSTESTn1 00:23:39.746 14:25:48 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:39.746 Running I/O for 10 seconds... 00:23:49.713 00:23:49.713 Latency(us) 00:23:49.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.713 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:49.713 Verification LBA range: start 0x0 length 0x2000 00:23:49.713 TLSTESTn1 : 10.05 2559.81 10.00 0.00 0.00 49854.87 10097.40 69128.34 00:23:49.713 =================================================================================================================== 00:23:49.713 Total : 2559.81 10.00 0.00 0.00 49854.87 10097.40 69128.34 00:23:49.713 0 00:23:49.713 14:25:59 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:49.713 14:25:59 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1423825 00:23:49.713 14:25:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1423825 ']' 00:23:49.713 14:25:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1423825 00:23:49.970 14:25:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:49.970 14:25:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.970 14:25:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1423825 00:23:49.970 14:25:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:49.970 14:25:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:49.970 14:25:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1423825' 00:23:49.970 killing process with pid 1423825 00:23:49.970 14:25:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1423825 00:23:49.970 Received shutdown signal, test time was about 10.000000 seconds 00:23:49.970 00:23:49.970 Latency(us) 00:23:49.970 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:23:49.970 =================================================================================================================== 00:23:49.970 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:49.970 [2024-07-10 14:25:59.226800] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:49.970 14:25:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1423825 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.wNKYQGCZLl 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wNKYQGCZLl 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wNKYQGCZLl 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wNKYQGCZLl 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wNKYQGCZLl' 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1425273 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1425273 /var/tmp/bdevperf.sock 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1425273 ']' 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.902 14:26:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.902 [2024-07-10 14:26:00.273262] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:23:50.902 [2024-07-10 14:26:00.273423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425273 ] 00:23:50.902 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.159 [2024-07-10 14:26:00.426503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.417 [2024-07-10 14:26:00.659822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.981 14:26:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.981 14:26:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:51.981 14:26:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wNKYQGCZLl 00:23:52.238 [2024-07-10 14:26:01.569739] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.238 [2024-07-10 14:26:01.569842] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:52.238 [2024-07-10 14:26:01.569863] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.wNKYQGCZLl 00:23:52.238 request: 00:23:52.238 { 00:23:52.238 "name": "TLSTEST", 00:23:52.238 "trtype": "tcp", 00:23:52.238 "traddr": "10.0.0.2", 00:23:52.238 "adrfam": "ipv4", 00:23:52.238 "trsvcid": "4420", 00:23:52.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.239 "prchk_reftag": false, 00:23:52.239 "prchk_guard": false, 00:23:52.239 "hdgst": false, 00:23:52.239 "ddgst": false, 00:23:52.239 "psk": "/tmp/tmp.wNKYQGCZLl", 00:23:52.239 "method": "bdev_nvme_attach_controller", 00:23:52.239 "req_id": 1 00:23:52.239 } 00:23:52.239 Got JSON-RPC error response 00:23:52.239 response: 00:23:52.239 { 00:23:52.239 "code": -1, 00:23:52.239 "message": "Operation not permitted" 00:23:52.239 } 00:23:52.239 14:26:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1425273 00:23:52.239 14:26:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1425273 ']' 00:23:52.239 14:26:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1425273 00:23:52.239 14:26:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:52.239 14:26:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.239 14:26:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1425273 00:23:52.239 14:26:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:52.239 14:26:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:52.239 14:26:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1425273' 00:23:52.239 killing process with pid 1425273 00:23:52.239 14:26:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1425273 00:23:52.239 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.239 00:23:52.239 Latency(us) 00:23:52.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.239 
=================================================================================================================== 00:23:52.239 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:52.239 14:26:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1425273 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1423417 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1423417 ']' 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1423417 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1423417 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1423417' 00:23:53.170 killing process with pid 1423417 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1423417 00:23:53.170 [2024-07-10 14:26:02.632111] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:53.170 14:26:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1423417 00:23:55.067 14:26:04 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:55.067 14:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:55.067 14:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:55.067 14:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.067 14:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1425692 00:23:55.067 14:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:55.067 14:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1425692 00:23:55.067 14:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1425692 ']' 00:23:55.067 14:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.067 14:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:55.067 14:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
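
[editor's note] A minimal sketch of the PSK file-permission behaviour demonstrated in this part of the run (an observation from the errors above, not an authoritative rule): after `chmod 0666` on the key file, tcp_load_psk / bdev_nvme_load_psk report "Incorrect permissions for PSK file" and the RPCs fail, whereas the earlier and later runs with `chmod 0600` succeed.

chmod 0600 /tmp/tmp.wNKYQGCZLl   # accepted: --psk attach / nvmf_subsystem_add_host succeed in this log
chmod 0666 /tmp/tmp.wNKYQGCZLl   # rejected: "Could not load PSK" -> "Operation not permitted" / "Internal error"
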
00:23:55.067 14:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:55.067 14:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.067 [2024-07-10 14:26:04.183344] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:23:55.067 [2024-07-10 14:26:04.183513] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.067 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.067 [2024-07-10 14:26:04.313636] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.324 [2024-07-10 14:26:04.565496] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.324 [2024-07-10 14:26:04.565567] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.324 [2024-07-10 14:26:04.565595] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.324 [2024-07-10 14:26:04.565620] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.324 [2024-07-10 14:26:04.565641] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:55.324 [2024-07-10 14:26:04.565697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.wNKYQGCZLl 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.wNKYQGCZLl 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.wNKYQGCZLl 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wNKYQGCZLl 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:55.889 [2024-07-10 14:26:05.349622] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.889 14:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:56.147 
14:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:56.404 [2024-07-10 14:26:05.838985] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:56.404 [2024-07-10 14:26:05.839260] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.404 14:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:56.661 malloc0 00:23:56.661 14:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:56.919 14:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wNKYQGCZLl 00:23:57.177 [2024-07-10 14:26:06.607070] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:57.177 [2024-07-10 14:26:06.607139] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:57.177 [2024-07-10 14:26:06.607197] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:57.177 request: 00:23:57.177 { 00:23:57.177 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.177 "host": "nqn.2016-06.io.spdk:host1", 00:23:57.177 "psk": "/tmp/tmp.wNKYQGCZLl", 00:23:57.177 "method": "nvmf_subsystem_add_host", 00:23:57.177 "req_id": 1 00:23:57.177 } 00:23:57.177 Got JSON-RPC error response 00:23:57.177 response: 00:23:57.177 { 00:23:57.177 "code": -32603, 00:23:57.177 "message": "Internal error" 00:23:57.177 } 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1425692 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1425692 ']' 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1425692 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1425692 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1425692' 00:23:57.177 killing process with pid 1425692 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1425692 00:23:57.177 14:26:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1425692 00:23:58.550 14:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.wNKYQGCZLl 00:23:58.550 14:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:58.550 
14:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:58.550 14:26:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.550 14:26:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.550 14:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1426242 00:23:58.550 14:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:58.550 14:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1426242 00:23:58.550 14:26:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1426242 ']' 00:23:58.550 14:26:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.550 14:26:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.550 14:26:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.550 14:26:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.550 14:26:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.813 [2024-07-10 14:26:08.068792] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:23:58.813 [2024-07-10 14:26:08.068924] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.813 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.813 [2024-07-10 14:26:08.203772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.102 [2024-07-10 14:26:08.460600] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.102 [2024-07-10 14:26:08.460698] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.102 [2024-07-10 14:26:08.460728] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.102 [2024-07-10 14:26:08.460753] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.102 [2024-07-10 14:26:08.460776] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:59.102 [2024-07-10 14:26:08.460826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.702 14:26:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.702 14:26:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:59.702 14:26:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:59.702 14:26:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:59.702 14:26:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.702 14:26:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.702 14:26:09 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.wNKYQGCZLl 00:23:59.702 14:26:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wNKYQGCZLl 00:23:59.702 14:26:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:59.958 [2024-07-10 14:26:09.301365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.958 14:26:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:00.215 14:26:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:00.473 [2024-07-10 14:26:09.887010] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:00.473 [2024-07-10 14:26:09.887295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.473 14:26:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:00.731 malloc0 00:24:00.988 14:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:01.247 14:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wNKYQGCZLl 00:24:01.505 [2024-07-10 14:26:10.793947] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:01.505 14:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1426539 00:24:01.505 14:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:01.505 14:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:01.505 14:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1426539 /var/tmp/bdevperf.sock 00:24:01.505 14:26:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1426539 ']' 00:24:01.505 14:26:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.505 14:26:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.505 14:26:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.505 14:26:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.505 14:26:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.505 [2024-07-10 14:26:10.891293] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:24:01.505 [2024-07-10 14:26:10.891453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1426539 ] 00:24:01.505 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.763 [2024-07-10 14:26:11.013098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.763 [2024-07-10 14:26:11.239075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.329 14:26:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.329 14:26:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:02.329 14:26:11 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wNKYQGCZLl 00:24:02.586 [2024-07-10 14:26:12.010438] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:02.586 [2024-07-10 14:26:12.010639] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:02.844 TLSTESTn1 00:24:02.845 14:26:12 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:03.103 14:26:12 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:24:03.103 "subsystems": [ 00:24:03.103 { 00:24:03.103 "subsystem": "keyring", 00:24:03.103 "config": [] 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "subsystem": "iobuf", 00:24:03.103 "config": [ 00:24:03.103 { 00:24:03.103 "method": "iobuf_set_options", 00:24:03.103 "params": { 00:24:03.103 "small_pool_count": 8192, 00:24:03.103 "large_pool_count": 1024, 00:24:03.103 "small_bufsize": 8192, 00:24:03.103 "large_bufsize": 135168 00:24:03.103 } 00:24:03.103 } 00:24:03.103 ] 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "subsystem": "sock", 00:24:03.103 "config": [ 00:24:03.103 { 00:24:03.103 "method": "sock_set_default_impl", 00:24:03.103 "params": { 00:24:03.103 "impl_name": "posix" 00:24:03.103 } 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "method": "sock_impl_set_options", 00:24:03.103 "params": { 00:24:03.103 "impl_name": "ssl", 00:24:03.103 "recv_buf_size": 4096, 00:24:03.103 "send_buf_size": 4096, 00:24:03.103 "enable_recv_pipe": true, 00:24:03.103 "enable_quickack": false, 00:24:03.103 "enable_placement_id": 0, 00:24:03.103 "enable_zerocopy_send_server": true, 00:24:03.103 "enable_zerocopy_send_client": false, 00:24:03.103 "zerocopy_threshold": 0, 00:24:03.103 "tls_version": 0, 00:24:03.103 "enable_ktls": false 00:24:03.103 } 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "method": "sock_impl_set_options", 00:24:03.103 "params": { 00:24:03.103 "impl_name": "posix", 00:24:03.103 "recv_buf_size": 2097152, 00:24:03.103 
"send_buf_size": 2097152, 00:24:03.103 "enable_recv_pipe": true, 00:24:03.103 "enable_quickack": false, 00:24:03.103 "enable_placement_id": 0, 00:24:03.103 "enable_zerocopy_send_server": true, 00:24:03.103 "enable_zerocopy_send_client": false, 00:24:03.103 "zerocopy_threshold": 0, 00:24:03.103 "tls_version": 0, 00:24:03.103 "enable_ktls": false 00:24:03.103 } 00:24:03.103 } 00:24:03.103 ] 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "subsystem": "vmd", 00:24:03.103 "config": [] 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "subsystem": "accel", 00:24:03.103 "config": [ 00:24:03.103 { 00:24:03.103 "method": "accel_set_options", 00:24:03.103 "params": { 00:24:03.103 "small_cache_size": 128, 00:24:03.103 "large_cache_size": 16, 00:24:03.103 "task_count": 2048, 00:24:03.103 "sequence_count": 2048, 00:24:03.103 "buf_count": 2048 00:24:03.103 } 00:24:03.103 } 00:24:03.103 ] 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "subsystem": "bdev", 00:24:03.103 "config": [ 00:24:03.103 { 00:24:03.103 "method": "bdev_set_options", 00:24:03.103 "params": { 00:24:03.103 "bdev_io_pool_size": 65535, 00:24:03.103 "bdev_io_cache_size": 256, 00:24:03.103 "bdev_auto_examine": true, 00:24:03.103 "iobuf_small_cache_size": 128, 00:24:03.103 "iobuf_large_cache_size": 16 00:24:03.103 } 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "method": "bdev_raid_set_options", 00:24:03.103 "params": { 00:24:03.103 "process_window_size_kb": 1024 00:24:03.103 } 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "method": "bdev_iscsi_set_options", 00:24:03.103 "params": { 00:24:03.103 "timeout_sec": 30 00:24:03.103 } 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "method": "bdev_nvme_set_options", 00:24:03.103 "params": { 00:24:03.103 "action_on_timeout": "none", 00:24:03.103 "timeout_us": 0, 00:24:03.103 "timeout_admin_us": 0, 00:24:03.103 "keep_alive_timeout_ms": 10000, 00:24:03.103 "arbitration_burst": 0, 00:24:03.103 "low_priority_weight": 0, 00:24:03.103 "medium_priority_weight": 0, 00:24:03.103 "high_priority_weight": 0, 00:24:03.103 "nvme_adminq_poll_period_us": 10000, 00:24:03.103 "nvme_ioq_poll_period_us": 0, 00:24:03.103 "io_queue_requests": 0, 00:24:03.103 "delay_cmd_submit": true, 00:24:03.103 "transport_retry_count": 4, 00:24:03.103 "bdev_retry_count": 3, 00:24:03.103 "transport_ack_timeout": 0, 00:24:03.103 "ctrlr_loss_timeout_sec": 0, 00:24:03.103 "reconnect_delay_sec": 0, 00:24:03.103 "fast_io_fail_timeout_sec": 0, 00:24:03.103 "disable_auto_failback": false, 00:24:03.103 "generate_uuids": false, 00:24:03.103 "transport_tos": 0, 00:24:03.103 "nvme_error_stat": false, 00:24:03.103 "rdma_srq_size": 0, 00:24:03.103 "io_path_stat": false, 00:24:03.103 "allow_accel_sequence": false, 00:24:03.103 "rdma_max_cq_size": 0, 00:24:03.103 "rdma_cm_event_timeout_ms": 0, 00:24:03.103 "dhchap_digests": [ 00:24:03.103 "sha256", 00:24:03.103 "sha384", 00:24:03.103 "sha512" 00:24:03.103 ], 00:24:03.103 "dhchap_dhgroups": [ 00:24:03.103 "null", 00:24:03.103 "ffdhe2048", 00:24:03.103 "ffdhe3072", 00:24:03.103 "ffdhe4096", 00:24:03.103 "ffdhe6144", 00:24:03.103 "ffdhe8192" 00:24:03.103 ] 00:24:03.103 } 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "method": "bdev_nvme_set_hotplug", 00:24:03.103 "params": { 00:24:03.103 "period_us": 100000, 00:24:03.103 "enable": false 00:24:03.103 } 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "method": "bdev_malloc_create", 00:24:03.103 "params": { 00:24:03.103 "name": "malloc0", 00:24:03.103 "num_blocks": 8192, 00:24:03.103 "block_size": 4096, 00:24:03.103 "physical_block_size": 4096, 00:24:03.103 "uuid": 
"1400b4a2-0f35-434d-a340-5ab4cb7bd281", 00:24:03.103 "optimal_io_boundary": 0 00:24:03.103 } 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "method": "bdev_wait_for_examine" 00:24:03.103 } 00:24:03.103 ] 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "subsystem": "nbd", 00:24:03.103 "config": [] 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "subsystem": "scheduler", 00:24:03.103 "config": [ 00:24:03.103 { 00:24:03.103 "method": "framework_set_scheduler", 00:24:03.103 "params": { 00:24:03.103 "name": "static" 00:24:03.103 } 00:24:03.103 } 00:24:03.103 ] 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "subsystem": "nvmf", 00:24:03.103 "config": [ 00:24:03.103 { 00:24:03.103 "method": "nvmf_set_config", 00:24:03.103 "params": { 00:24:03.103 "discovery_filter": "match_any", 00:24:03.103 "admin_cmd_passthru": { 00:24:03.103 "identify_ctrlr": false 00:24:03.103 } 00:24:03.103 } 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "method": "nvmf_set_max_subsystems", 00:24:03.103 "params": { 00:24:03.103 "max_subsystems": 1024 00:24:03.103 } 00:24:03.103 }, 00:24:03.103 { 00:24:03.103 "method": "nvmf_set_crdt", 00:24:03.103 "params": { 00:24:03.103 "crdt1": 0, 00:24:03.103 "crdt2": 0, 00:24:03.103 "crdt3": 0 00:24:03.103 } 00:24:03.103 }, 00:24:03.103 { 00:24:03.104 "method": "nvmf_create_transport", 00:24:03.104 "params": { 00:24:03.104 "trtype": "TCP", 00:24:03.104 "max_queue_depth": 128, 00:24:03.104 "max_io_qpairs_per_ctrlr": 127, 00:24:03.104 "in_capsule_data_size": 4096, 00:24:03.104 "max_io_size": 131072, 00:24:03.104 "io_unit_size": 131072, 00:24:03.104 "max_aq_depth": 128, 00:24:03.104 "num_shared_buffers": 511, 00:24:03.104 "buf_cache_size": 4294967295, 00:24:03.104 "dif_insert_or_strip": false, 00:24:03.104 "zcopy": false, 00:24:03.104 "c2h_success": false, 00:24:03.104 "sock_priority": 0, 00:24:03.104 "abort_timeout_sec": 1, 00:24:03.104 "ack_timeout": 0, 00:24:03.104 "data_wr_pool_size": 0 00:24:03.104 } 00:24:03.104 }, 00:24:03.104 { 00:24:03.104 "method": "nvmf_create_subsystem", 00:24:03.104 "params": { 00:24:03.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.104 "allow_any_host": false, 00:24:03.104 "serial_number": "SPDK00000000000001", 00:24:03.104 "model_number": "SPDK bdev Controller", 00:24:03.104 "max_namespaces": 10, 00:24:03.104 "min_cntlid": 1, 00:24:03.104 "max_cntlid": 65519, 00:24:03.104 "ana_reporting": false 00:24:03.104 } 00:24:03.104 }, 00:24:03.104 { 00:24:03.104 "method": "nvmf_subsystem_add_host", 00:24:03.104 "params": { 00:24:03.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.104 "host": "nqn.2016-06.io.spdk:host1", 00:24:03.104 "psk": "/tmp/tmp.wNKYQGCZLl" 00:24:03.104 } 00:24:03.104 }, 00:24:03.104 { 00:24:03.104 "method": "nvmf_subsystem_add_ns", 00:24:03.104 "params": { 00:24:03.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.104 "namespace": { 00:24:03.104 "nsid": 1, 00:24:03.104 "bdev_name": "malloc0", 00:24:03.104 "nguid": "1400B4A20F35434DA3405AB4CB7BD281", 00:24:03.104 "uuid": "1400b4a2-0f35-434d-a340-5ab4cb7bd281", 00:24:03.104 "no_auto_visible": false 00:24:03.104 } 00:24:03.104 } 00:24:03.104 }, 00:24:03.104 { 00:24:03.104 "method": "nvmf_subsystem_add_listener", 00:24:03.104 "params": { 00:24:03.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.104 "listen_address": { 00:24:03.104 "trtype": "TCP", 00:24:03.104 "adrfam": "IPv4", 00:24:03.104 "traddr": "10.0.0.2", 00:24:03.104 "trsvcid": "4420" 00:24:03.104 }, 00:24:03.104 "secure_channel": true 00:24:03.104 } 00:24:03.104 } 00:24:03.104 ] 00:24:03.104 } 00:24:03.104 ] 00:24:03.104 }' 00:24:03.104 14:26:12 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:03.362 14:26:12 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:03.362 "subsystems": [ 00:24:03.362 { 00:24:03.362 "subsystem": "keyring", 00:24:03.362 "config": [] 00:24:03.362 }, 00:24:03.362 { 00:24:03.362 "subsystem": "iobuf", 00:24:03.362 "config": [ 00:24:03.362 { 00:24:03.362 "method": "iobuf_set_options", 00:24:03.362 "params": { 00:24:03.362 "small_pool_count": 8192, 00:24:03.362 "large_pool_count": 1024, 00:24:03.362 "small_bufsize": 8192, 00:24:03.362 "large_bufsize": 135168 00:24:03.362 } 00:24:03.362 } 00:24:03.362 ] 00:24:03.362 }, 00:24:03.362 { 00:24:03.362 "subsystem": "sock", 00:24:03.362 "config": [ 00:24:03.362 { 00:24:03.362 "method": "sock_set_default_impl", 00:24:03.362 "params": { 00:24:03.362 "impl_name": "posix" 00:24:03.362 } 00:24:03.362 }, 00:24:03.362 { 00:24:03.362 "method": "sock_impl_set_options", 00:24:03.362 "params": { 00:24:03.362 "impl_name": "ssl", 00:24:03.362 "recv_buf_size": 4096, 00:24:03.362 "send_buf_size": 4096, 00:24:03.362 "enable_recv_pipe": true, 00:24:03.362 "enable_quickack": false, 00:24:03.362 "enable_placement_id": 0, 00:24:03.362 "enable_zerocopy_send_server": true, 00:24:03.362 "enable_zerocopy_send_client": false, 00:24:03.362 "zerocopy_threshold": 0, 00:24:03.362 "tls_version": 0, 00:24:03.362 "enable_ktls": false 00:24:03.362 } 00:24:03.362 }, 00:24:03.362 { 00:24:03.362 "method": "sock_impl_set_options", 00:24:03.362 "params": { 00:24:03.362 "impl_name": "posix", 00:24:03.362 "recv_buf_size": 2097152, 00:24:03.362 "send_buf_size": 2097152, 00:24:03.362 "enable_recv_pipe": true, 00:24:03.362 "enable_quickack": false, 00:24:03.362 "enable_placement_id": 0, 00:24:03.362 "enable_zerocopy_send_server": true, 00:24:03.362 "enable_zerocopy_send_client": false, 00:24:03.362 "zerocopy_threshold": 0, 00:24:03.362 "tls_version": 0, 00:24:03.362 "enable_ktls": false 00:24:03.362 } 00:24:03.362 } 00:24:03.362 ] 00:24:03.362 }, 00:24:03.362 { 00:24:03.362 "subsystem": "vmd", 00:24:03.362 "config": [] 00:24:03.362 }, 00:24:03.362 { 00:24:03.362 "subsystem": "accel", 00:24:03.362 "config": [ 00:24:03.362 { 00:24:03.362 "method": "accel_set_options", 00:24:03.362 "params": { 00:24:03.362 "small_cache_size": 128, 00:24:03.362 "large_cache_size": 16, 00:24:03.362 "task_count": 2048, 00:24:03.362 "sequence_count": 2048, 00:24:03.362 "buf_count": 2048 00:24:03.362 } 00:24:03.362 } 00:24:03.362 ] 00:24:03.362 }, 00:24:03.362 { 00:24:03.362 "subsystem": "bdev", 00:24:03.362 "config": [ 00:24:03.362 { 00:24:03.362 "method": "bdev_set_options", 00:24:03.362 "params": { 00:24:03.362 "bdev_io_pool_size": 65535, 00:24:03.362 "bdev_io_cache_size": 256, 00:24:03.362 "bdev_auto_examine": true, 00:24:03.362 "iobuf_small_cache_size": 128, 00:24:03.362 "iobuf_large_cache_size": 16 00:24:03.362 } 00:24:03.362 }, 00:24:03.362 { 00:24:03.362 "method": "bdev_raid_set_options", 00:24:03.362 "params": { 00:24:03.362 "process_window_size_kb": 1024 00:24:03.362 } 00:24:03.362 }, 00:24:03.362 { 00:24:03.362 "method": "bdev_iscsi_set_options", 00:24:03.362 "params": { 00:24:03.362 "timeout_sec": 30 00:24:03.362 } 00:24:03.362 }, 00:24:03.362 { 00:24:03.362 "method": "bdev_nvme_set_options", 00:24:03.362 "params": { 00:24:03.362 "action_on_timeout": "none", 00:24:03.362 "timeout_us": 0, 00:24:03.362 "timeout_admin_us": 0, 00:24:03.362 "keep_alive_timeout_ms": 10000, 00:24:03.362 "arbitration_burst": 0, 
00:24:03.363 "low_priority_weight": 0, 00:24:03.363 "medium_priority_weight": 0, 00:24:03.363 "high_priority_weight": 0, 00:24:03.363 "nvme_adminq_poll_period_us": 10000, 00:24:03.363 "nvme_ioq_poll_period_us": 0, 00:24:03.363 "io_queue_requests": 512, 00:24:03.363 "delay_cmd_submit": true, 00:24:03.363 "transport_retry_count": 4, 00:24:03.363 "bdev_retry_count": 3, 00:24:03.363 "transport_ack_timeout": 0, 00:24:03.363 "ctrlr_loss_timeout_sec": 0, 00:24:03.363 "reconnect_delay_sec": 0, 00:24:03.363 "fast_io_fail_timeout_sec": 0, 00:24:03.363 "disable_auto_failback": false, 00:24:03.363 "generate_uuids": false, 00:24:03.363 "transport_tos": 0, 00:24:03.363 "nvme_error_stat": false, 00:24:03.363 "rdma_srq_size": 0, 00:24:03.363 "io_path_stat": false, 00:24:03.363 "allow_accel_sequence": false, 00:24:03.363 "rdma_max_cq_size": 0, 00:24:03.363 "rdma_cm_event_timeout_ms": 0, 00:24:03.363 "dhchap_digests": [ 00:24:03.363 "sha256", 00:24:03.363 "sha384", 00:24:03.363 "sha512" 00:24:03.363 ], 00:24:03.363 "dhchap_dhgroups": [ 00:24:03.363 "null", 00:24:03.363 "ffdhe2048", 00:24:03.363 "ffdhe3072", 00:24:03.363 "ffdhe4096", 00:24:03.363 "ffdhe6144", 00:24:03.363 "ffdhe8192" 00:24:03.363 ] 00:24:03.363 } 00:24:03.363 }, 00:24:03.363 { 00:24:03.363 "method": "bdev_nvme_attach_controller", 00:24:03.363 "params": { 00:24:03.363 "name": "TLSTEST", 00:24:03.363 "trtype": "TCP", 00:24:03.363 "adrfam": "IPv4", 00:24:03.363 "traddr": "10.0.0.2", 00:24:03.363 "trsvcid": "4420", 00:24:03.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.363 "prchk_reftag": false, 00:24:03.363 "prchk_guard": false, 00:24:03.363 "ctrlr_loss_timeout_sec": 0, 00:24:03.363 "reconnect_delay_sec": 0, 00:24:03.363 "fast_io_fail_timeout_sec": 0, 00:24:03.363 "psk": "/tmp/tmp.wNKYQGCZLl", 00:24:03.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.363 "hdgst": false, 00:24:03.363 "ddgst": false 00:24:03.363 } 00:24:03.363 }, 00:24:03.363 { 00:24:03.363 "method": "bdev_nvme_set_hotplug", 00:24:03.363 "params": { 00:24:03.363 "period_us": 100000, 00:24:03.363 "enable": false 00:24:03.363 } 00:24:03.363 }, 00:24:03.363 { 00:24:03.363 "method": "bdev_wait_for_examine" 00:24:03.363 } 00:24:03.363 ] 00:24:03.363 }, 00:24:03.363 { 00:24:03.363 "subsystem": "nbd", 00:24:03.363 "config": [] 00:24:03.363 } 00:24:03.363 ] 00:24:03.363 }' 00:24:03.363 14:26:12 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1426539 00:24:03.363 14:26:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1426539 ']' 00:24:03.363 14:26:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1426539 00:24:03.363 14:26:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:03.363 14:26:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.363 14:26:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1426539 00:24:03.363 14:26:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:03.363 14:26:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:03.363 14:26:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1426539' 00:24:03.363 killing process with pid 1426539 00:24:03.363 14:26:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1426539 00:24:03.363 Received shutdown signal, test time was about 10.000000 seconds 00:24:03.363 00:24:03.363 Latency(us) 00:24:03.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:24:03.363 =================================================================================================================== 00:24:03.363 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:03.363 14:26:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1426539 00:24:03.363 [2024-07-10 14:26:12.786634] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:04.297 14:26:13 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1426242 00:24:04.297 14:26:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1426242 ']' 00:24:04.297 14:26:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1426242 00:24:04.297 14:26:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:04.297 14:26:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:04.297 14:26:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1426242 00:24:04.555 14:26:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:04.555 14:26:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:04.555 14:26:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1426242' 00:24:04.555 killing process with pid 1426242 00:24:04.555 14:26:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1426242 00:24:04.555 [2024-07-10 14:26:13.801986] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:04.555 14:26:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1426242 00:24:05.928 14:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:05.928 14:26:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:05.928 14:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:24:05.928 "subsystems": [ 00:24:05.928 { 00:24:05.928 "subsystem": "keyring", 00:24:05.928 "config": [] 00:24:05.928 }, 00:24:05.928 { 00:24:05.928 "subsystem": "iobuf", 00:24:05.928 "config": [ 00:24:05.928 { 00:24:05.928 "method": "iobuf_set_options", 00:24:05.928 "params": { 00:24:05.928 "small_pool_count": 8192, 00:24:05.928 "large_pool_count": 1024, 00:24:05.928 "small_bufsize": 8192, 00:24:05.928 "large_bufsize": 135168 00:24:05.928 } 00:24:05.928 } 00:24:05.928 ] 00:24:05.928 }, 00:24:05.928 { 00:24:05.928 "subsystem": "sock", 00:24:05.928 "config": [ 00:24:05.928 { 00:24:05.928 "method": "sock_set_default_impl", 00:24:05.928 "params": { 00:24:05.928 "impl_name": "posix" 00:24:05.928 } 00:24:05.928 }, 00:24:05.928 { 00:24:05.928 "method": "sock_impl_set_options", 00:24:05.928 "params": { 00:24:05.928 "impl_name": "ssl", 00:24:05.928 "recv_buf_size": 4096, 00:24:05.928 "send_buf_size": 4096, 00:24:05.928 "enable_recv_pipe": true, 00:24:05.928 "enable_quickack": false, 00:24:05.928 "enable_placement_id": 0, 00:24:05.928 "enable_zerocopy_send_server": true, 00:24:05.928 "enable_zerocopy_send_client": false, 00:24:05.928 "zerocopy_threshold": 0, 00:24:05.928 "tls_version": 0, 00:24:05.928 "enable_ktls": false 00:24:05.928 } 00:24:05.928 }, 00:24:05.928 { 00:24:05.928 "method": "sock_impl_set_options", 00:24:05.928 "params": { 00:24:05.928 "impl_name": "posix", 00:24:05.928 "recv_buf_size": 2097152, 00:24:05.928 "send_buf_size": 2097152, 00:24:05.928 "enable_recv_pipe": true, 
00:24:05.928 "enable_quickack": false, 00:24:05.928 "enable_placement_id": 0, 00:24:05.929 "enable_zerocopy_send_server": true, 00:24:05.929 "enable_zerocopy_send_client": false, 00:24:05.929 "zerocopy_threshold": 0, 00:24:05.929 "tls_version": 0, 00:24:05.929 "enable_ktls": false 00:24:05.929 } 00:24:05.929 } 00:24:05.929 ] 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "subsystem": "vmd", 00:24:05.929 "config": [] 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "subsystem": "accel", 00:24:05.929 "config": [ 00:24:05.929 { 00:24:05.929 "method": "accel_set_options", 00:24:05.929 "params": { 00:24:05.929 "small_cache_size": 128, 00:24:05.929 "large_cache_size": 16, 00:24:05.929 "task_count": 2048, 00:24:05.929 "sequence_count": 2048, 00:24:05.929 "buf_count": 2048 00:24:05.929 } 00:24:05.929 } 00:24:05.929 ] 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "subsystem": "bdev", 00:24:05.929 "config": [ 00:24:05.929 { 00:24:05.929 "method": "bdev_set_options", 00:24:05.929 "params": { 00:24:05.929 "bdev_io_pool_size": 65535, 00:24:05.929 "bdev_io_cache_size": 256, 00:24:05.929 "bdev_auto_examine": true, 00:24:05.929 "iobuf_small_cache_size": 128, 00:24:05.929 "iobuf_large_cache_size": 16 00:24:05.929 } 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "method": "bdev_raid_set_options", 00:24:05.929 "params": { 00:24:05.929 "process_window_size_kb": 1024 00:24:05.929 } 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "method": "bdev_iscsi_set_options", 00:24:05.929 "params": { 00:24:05.929 "timeout_sec": 30 00:24:05.929 } 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "method": "bdev_nvme_set_options", 00:24:05.929 "params": { 00:24:05.929 "action_on_timeout": "none", 00:24:05.929 "timeout_us": 0, 00:24:05.929 "timeout_admin_us": 0, 00:24:05.929 "keep_alive_timeout_ms": 10000, 00:24:05.929 "arbitration_burst": 0, 00:24:05.929 "low_priority_weight": 0, 00:24:05.929 "medium_priority_weight": 0, 00:24:05.929 "high_priority_weight": 0, 00:24:05.929 "nvme_adminq_poll_period_us": 10000, 00:24:05.929 "nvme_ioq_poll_period_us": 0, 00:24:05.929 "io_queue_requests": 0, 00:24:05.929 "delay_cmd_submit": true, 00:24:05.929 "transport_retry_count": 4, 00:24:05.929 "bdev_retry_count": 3, 00:24:05.929 "transport_ack_timeout": 0, 00:24:05.929 "ctrlr_loss_timeout_sec": 0, 00:24:05.929 "reconnect_delay_sec": 0, 00:24:05.929 "fast_io_fail_timeout_sec": 0, 00:24:05.929 "disable_auto_failback": false, 00:24:05.929 "generate_uuids": false, 00:24:05.929 "transport_tos": 0, 00:24:05.929 "nvme_error_stat": false, 00:24:05.929 "rdma_srq_size": 0, 00:24:05.929 "io_path_stat": false, 00:24:05.929 "allow_accel_sequence": false, 00:24:05.929 "rdma_max_cq_size": 0, 00:24:05.929 "rdma_cm_event_timeout_ms": 0, 00:24:05.929 "dhchap_digests": [ 00:24:05.929 "sha256", 00:24:05.929 "sha384", 00:24:05.929 "sha512" 00:24:05.929 ], 00:24:05.929 "dhchap_dhgroups": [ 00:24:05.929 "null", 00:24:05.929 "ffdhe2048", 00:24:05.929 "ffdhe3072", 00:24:05.929 "ffdhe4096", 00:24:05.929 "ffdhe6144", 00:24:05.929 "ffdhe8192" 00:24:05.929 ] 00:24:05.929 } 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "method": "bdev_nvme_set_hotplug", 00:24:05.929 "params": { 00:24:05.929 "period_us": 100000, 00:24:05.929 "enable": false 00:24:05.929 } 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "method": "bdev_malloc_create", 00:24:05.929 "params": { 00:24:05.929 "name": "malloc0", 00:24:05.929 "num_blocks": 8192, 00:24:05.929 "block_size": 4096, 00:24:05.929 "physical_block_size": 4096, 00:24:05.929 "uuid": "1400b4a2-0f35-434d-a340-5ab4cb7bd281", 00:24:05.929 "optimal_io_boundary": 0 
00:24:05.929 } 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "method": "bdev_wait_for_examine" 00:24:05.929 } 00:24:05.929 ] 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "subsystem": "nbd", 00:24:05.929 "config": [] 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "subsystem": "scheduler", 00:24:05.929 "config": [ 00:24:05.929 { 00:24:05.929 "method": "framework_set_scheduler", 00:24:05.929 "params": { 00:24:05.929 "name": "static" 00:24:05.929 } 00:24:05.929 } 00:24:05.929 ] 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "subsystem": "nvmf", 00:24:05.929 "config": [ 00:24:05.929 { 00:24:05.929 "method": "nvmf_set_config", 00:24:05.929 "params": { 00:24:05.929 "discovery_filter": "match_any", 00:24:05.929 "admin_cmd_passthru": { 00:24:05.929 "identify_ctrlr": false 00:24:05.929 } 00:24:05.929 } 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "method": "nvmf_set_max_subsystems", 00:24:05.929 "params": { 00:24:05.929 "max_subsystems": 1024 00:24:05.929 } 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "method": "nvmf_set_crdt", 00:24:05.929 "params": { 00:24:05.929 "crdt1": 0, 00:24:05.929 "crdt2": 0, 00:24:05.929 "crdt3": 0 00:24:05.929 } 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "method": "nvmf_create_transport", 00:24:05.929 "params": { 00:24:05.929 "trtype": "TCP", 00:24:05.929 "max_queue_depth": 128, 00:24:05.929 "max_io_qpairs_per_ctrlr": 127, 00:24:05.929 "in_capsule_data_size": 4096, 00:24:05.929 "max_io_size": 131072, 00:24:05.929 "io_unit_size": 131072, 00:24:05.929 "max_aq_depth": 128, 00:24:05.929 "num_shared_buffers": 511, 00:24:05.929 "buf_cache_size": 4294967295, 00:24:05.929 "dif_insert_or_strip": false, 00:24:05.929 "zcopy": false, 00:24:05.929 "c2h_success": false, 00:24:05.929 "sock_priority": 0, 00:24:05.929 "abort_timeout_sec": 1, 00:24:05.929 "ack_timeout": 0, 00:24:05.929 "data_wr_pool_size": 0 00:24:05.929 } 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "method": "nvmf_create_subsystem", 00:24:05.929 "params": { 00:24:05.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.929 "allow_any_host": false, 00:24:05.929 "serial_number": "SPDK00000000000001", 00:24:05.929 "model_number": "SPDK bdev Controller", 00:24:05.929 "max_namespaces": 10, 00:24:05.929 "min_cntlid": 1, 00:24:05.929 "max_cntlid": 65519, 00:24:05.929 "ana_reporting": false 00:24:05.929 } 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "method": "nvmf_subsystem_add_host", 00:24:05.929 "params": { 00:24:05.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.929 "host": "nqn.2016-06.io.spdk:host1", 00:24:05.929 "psk": "/tmp/tmp.wNKYQGCZLl" 00:24:05.929 } 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "method": "nvmf_subsystem_add_ns", 00:24:05.929 "params": { 00:24:05.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.929 "namespace": { 00:24:05.929 "nsid": 1, 00:24:05.929 "bdev_name": "malloc0", 00:24:05.929 "nguid": "1400B4A20F35434DA3405AB4CB7BD281", 00:24:05.929 "uuid": "1400b4a2-0f35-434d-a340-5ab4cb7bd281", 00:24:05.929 "no_auto_visible": false 00:24:05.929 } 00:24:05.929 } 00:24:05.929 }, 00:24:05.929 { 00:24:05.929 "method": "nvmf_subsystem_add_listener", 00:24:05.929 "params": { 00:24:05.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.929 "listen_address": { 00:24:05.929 "trtype": "TCP", 00:24:05.929 "adrfam": "IPv4", 00:24:05.929 "traddr": "10.0.0.2", 00:24:05.929 "trsvcid": "4420" 00:24:05.929 }, 00:24:05.929 "secure_channel": true 00:24:05.929 } 00:24:05.929 } 00:24:05.929 ] 00:24:05.929 } 00:24:05.929 ] 00:24:05.929 }' 00:24:05.929 14:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:05.929 
14:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.929 14:26:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1427083 00:24:05.929 14:26:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:05.929 14:26:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1427083 00:24:05.929 14:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1427083 ']' 00:24:05.929 14:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.929 14:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.929 14:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.929 14:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.929 14:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.929 [2024-07-10 14:26:15.317845] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:24:05.929 [2024-07-10 14:26:15.317999] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.929 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.187 [2024-07-10 14:26:15.455602] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.445 [2024-07-10 14:26:15.711990] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.445 [2024-07-10 14:26:15.712062] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.445 [2024-07-10 14:26:15.712090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.445 [2024-07-10 14:26:15.712115] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.445 [2024-07-10 14:26:15.712137] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:06.445 [2024-07-10 14:26:15.712282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.011 [2024-07-10 14:26:16.251286] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.011 [2024-07-10 14:26:16.267254] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:07.011 [2024-07-10 14:26:16.283276] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:07.011 [2024-07-10 14:26:16.283574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.011 14:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:07.011 14:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:07.011 14:26:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:07.011 14:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:07.011 14:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.011 14:26:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.011 14:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1427233 00:24:07.011 14:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1427233 /var/tmp/bdevperf.sock 00:24:07.011 14:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1427233 ']' 00:24:07.011 14:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.011 14:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:07.011 14:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.011 14:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
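For context on the "-c /dev/fd/62" and "-c /dev/fd/63" arguments seen in this part of the trace: the nvmf target and the bdevperf app are each relaunched with a previously captured save_config JSON fed to them through a file descriptor, rather than being reconfigured call by call over RPC. The lines below are a rough sketch of that pattern, not the test script itself: it assumes the /dev/fd paths come from bash process substitution, it drops the "ip netns exec cvl_0_0_ns_spdk" wrapper and the "-i 0 -e 0xFFFF" trace flags used in the real invocation, and it reuses only flags that appear in this log.
  # capture the running target's configuration, then restart a target from it
  tgtconf=$(scripts/rpc.py save_config)
  build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &
  # same idea for bdevperf: its JSON config is handed in as /dev/fd/NN at startup
  bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &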
00:24:07.011 14:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:24:07.011 "subsystems": [ 00:24:07.011 { 00:24:07.011 "subsystem": "keyring", 00:24:07.011 "config": [] 00:24:07.011 }, 00:24:07.011 { 00:24:07.011 "subsystem": "iobuf", 00:24:07.011 "config": [ 00:24:07.011 { 00:24:07.011 "method": "iobuf_set_options", 00:24:07.011 "params": { 00:24:07.011 "small_pool_count": 8192, 00:24:07.011 "large_pool_count": 1024, 00:24:07.011 "small_bufsize": 8192, 00:24:07.011 "large_bufsize": 135168 00:24:07.011 } 00:24:07.011 } 00:24:07.011 ] 00:24:07.011 }, 00:24:07.011 { 00:24:07.011 "subsystem": "sock", 00:24:07.011 "config": [ 00:24:07.011 { 00:24:07.011 "method": "sock_set_default_impl", 00:24:07.011 "params": { 00:24:07.011 "impl_name": "posix" 00:24:07.011 } 00:24:07.011 }, 00:24:07.011 { 00:24:07.011 "method": "sock_impl_set_options", 00:24:07.012 "params": { 00:24:07.012 "impl_name": "ssl", 00:24:07.012 "recv_buf_size": 4096, 00:24:07.012 "send_buf_size": 4096, 00:24:07.012 "enable_recv_pipe": true, 00:24:07.012 "enable_quickack": false, 00:24:07.012 "enable_placement_id": 0, 00:24:07.012 "enable_zerocopy_send_server": true, 00:24:07.012 "enable_zerocopy_send_client": false, 00:24:07.012 "zerocopy_threshold": 0, 00:24:07.012 "tls_version": 0, 00:24:07.012 "enable_ktls": false 00:24:07.012 } 00:24:07.012 }, 00:24:07.012 { 00:24:07.012 "method": "sock_impl_set_options", 00:24:07.012 "params": { 00:24:07.012 "impl_name": "posix", 00:24:07.012 "recv_buf_size": 2097152, 00:24:07.012 "send_buf_size": 2097152, 00:24:07.012 "enable_recv_pipe": true, 00:24:07.012 "enable_quickack": false, 00:24:07.012 "enable_placement_id": 0, 00:24:07.012 "enable_zerocopy_send_server": true, 00:24:07.012 "enable_zerocopy_send_client": false, 00:24:07.012 "zerocopy_threshold": 0, 00:24:07.012 "tls_version": 0, 00:24:07.012 "enable_ktls": false 00:24:07.012 } 00:24:07.012 } 00:24:07.012 ] 00:24:07.012 }, 00:24:07.012 { 00:24:07.012 "subsystem": "vmd", 00:24:07.012 "config": [] 00:24:07.012 }, 00:24:07.012 { 00:24:07.012 "subsystem": "accel", 00:24:07.012 "config": [ 00:24:07.012 { 00:24:07.012 "method": "accel_set_options", 00:24:07.012 "params": { 00:24:07.012 "small_cache_size": 128, 00:24:07.012 "large_cache_size": 16, 00:24:07.012 "task_count": 2048, 00:24:07.012 "sequence_count": 2048, 00:24:07.012 "buf_count": 2048 00:24:07.012 } 00:24:07.012 } 00:24:07.012 ] 00:24:07.012 }, 00:24:07.012 { 00:24:07.012 "subsystem": "bdev", 00:24:07.012 "config": [ 00:24:07.012 { 00:24:07.012 "method": "bdev_set_options", 00:24:07.012 "params": { 00:24:07.012 "bdev_io_pool_size": 65535, 00:24:07.012 "bdev_io_cache_size": 256, 00:24:07.012 "bdev_auto_examine": true, 00:24:07.012 "iobuf_small_cache_size": 128, 00:24:07.012 "iobuf_large_cache_size": 16 00:24:07.012 } 00:24:07.012 }, 00:24:07.012 { 00:24:07.012 "method": "bdev_raid_set_options", 00:24:07.012 "params": { 00:24:07.012 "process_window_size_kb": 1024 00:24:07.012 } 00:24:07.012 }, 00:24:07.012 { 00:24:07.012 "method": "bdev_iscsi_set_options", 00:24:07.012 "params": { 00:24:07.012 "timeout_sec": 30 00:24:07.012 } 00:24:07.012 }, 00:24:07.012 { 00:24:07.012 "method": "bdev_nvme_set_options", 00:24:07.012 "params": { 00:24:07.012 "action_on_timeout": "none", 00:24:07.012 "timeout_us": 0, 00:24:07.012 "timeout_admin_us": 0, 00:24:07.012 "keep_alive_timeout_ms": 10000, 00:24:07.012 "arbitration_burst": 0, 00:24:07.012 "low_priority_weight": 0, 00:24:07.012 "medium_priority_weight": 0, 00:24:07.012 "high_priority_weight": 0, 00:24:07.012 
"nvme_adminq_poll_period_us": 10000, 00:24:07.012 "nvme_ioq_poll_period_us": 0, 00:24:07.012 "io_queue_requests": 512, 00:24:07.012 "delay_cmd_submit": true, 00:24:07.012 "transport_retry_count": 4, 00:24:07.012 "bdev_retry_count": 3, 00:24:07.012 "transport_ack_timeout": 0, 00:24:07.012 "ctrlr_loss_timeout_sec": 0, 00:24:07.012 "reconnect_delay_sec": 0, 00:24:07.012 "fast_io_fail_timeout_sec": 0, 00:24:07.012 "disable_auto_failback": false, 00:24:07.012 "generate_uuids": false, 00:24:07.012 "transport_tos": 0, 00:24:07.012 "nvme_error_stat": false, 00:24:07.012 "rdma_srq_size": 0, 00:24:07.012 "io_path_stat": false, 00:24:07.012 "allow_accel_sequence": false, 00:24:07.012 "rdma_max_cq_size": 0, 00:24:07.012 "rdma_cm_event_timeout_ms": 0, 00:24:07.012 "dhchap_digests": [ 00:24:07.012 "sha256", 00:24:07.012 "sha384", 00:24:07.012 "sha512" 00:24:07.012 ], 00:24:07.012 "dhchap_dhgroups": [ 00:24:07.012 "null", 00:24:07.012 "ffdhe2048", 00:24:07.012 "ffdhe3072", 00:24:07.012 "ffdhe4096", 00:24:07.012 "ffdWaiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.012 he6144", 00:24:07.012 "ffdhe8192" 00:24:07.012 ] 00:24:07.012 } 00:24:07.012 }, 00:24:07.012 { 00:24:07.012 "method": "bdev_nvme_attach_controller", 00:24:07.012 "params": { 00:24:07.012 "name": "TLSTEST", 00:24:07.012 "trtype": "TCP", 00:24:07.012 "adrfam": "IPv4", 00:24:07.012 "traddr": "10.0.0.2", 00:24:07.012 "trsvcid": "4420", 00:24:07.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.012 "prchk_reftag": false, 00:24:07.012 "prchk_guard": false, 00:24:07.012 "ctrlr_loss_timeout_sec": 0, 00:24:07.012 "reconnect_delay_sec": 0, 00:24:07.012 "fast_io_fail_timeout_sec": 0, 00:24:07.012 "psk": "/tmp/tmp.wNKYQGCZLl", 00:24:07.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.012 "hdgst": false, 00:24:07.012 "ddgst": false 00:24:07.012 } 00:24:07.012 }, 00:24:07.012 { 00:24:07.012 "method": "bdev_nvme_set_hotplug", 00:24:07.012 "params": { 00:24:07.012 "period_us": 100000, 00:24:07.012 "enable": false 00:24:07.012 } 00:24:07.012 }, 00:24:07.012 { 00:24:07.012 "method": "bdev_wait_for_examine" 00:24:07.012 } 00:24:07.012 ] 00:24:07.012 }, 00:24:07.012 { 00:24:07.012 "subsystem": "nbd", 00:24:07.012 "config": [] 00:24:07.012 } 00:24:07.012 ] 00:24:07.012 }' 00:24:07.012 14:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.012 14:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.012 [2024-07-10 14:26:16.415313] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:24:07.012 [2024-07-10 14:26:16.415475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427233 ] 00:24:07.012 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.270 [2024-07-10 14:26:16.535314] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.528 [2024-07-10 14:26:16.763865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.786 [2024-07-10 14:26:17.157612] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.787 [2024-07-10 14:26:17.157786] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:08.044 14:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.044 14:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:08.045 14:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:08.045 Running I/O for 10 seconds... 00:24:20.239 00:24:20.239 Latency(us) 00:24:20.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.239 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:20.239 Verification LBA range: start 0x0 length 0x2000 00:24:20.239 TLSTESTn1 : 10.05 2560.73 10.00 0.00 0.00 49845.22 8204.14 68739.98 00:24:20.239 =================================================================================================================== 00:24:20.239 Total : 2560.73 10.00 0.00 0.00 49845.22 8204.14 68739.98 00:24:20.239 0 00:24:20.239 14:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:20.239 14:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1427233 00:24:20.239 14:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1427233 ']' 00:24:20.239 14:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1427233 00:24:20.239 14:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:20.239 14:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:20.239 14:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1427233 00:24:20.239 14:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:20.239 14:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:20.239 14:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1427233' 00:24:20.239 killing process with pid 1427233 00:24:20.239 14:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1427233 00:24:20.239 Received shutdown signal, test time was about 10.000000 seconds 00:24:20.239 00:24:20.239 Latency(us) 00:24:20.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.239 =================================================================================================================== 00:24:20.239 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.239 [2024-07-10 14:26:27.608085] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:20.239 14:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1427233 00:24:20.239 14:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1427083 00:24:20.239 14:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1427083 ']' 00:24:20.239 14:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1427083 00:24:20.239 14:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:20.239 14:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:20.239 14:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1427083 00:24:20.239 14:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:20.239 14:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:20.239 14:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1427083' 00:24:20.239 killing process with pid 1427083 00:24:20.239 14:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1427083 00:24:20.239 14:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1427083 00:24:20.239 [2024-07-10 14:26:28.584174] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:20.803 14:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:24:20.803 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:20.803 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:20.803 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.803 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1428824 00:24:20.803 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:20.803 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1428824 00:24:20.803 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1428824 ']' 00:24:20.803 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.803 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.803 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.803 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.803 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.803 [2024-07-10 14:26:30.138009] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:24:20.803 [2024-07-10 14:26:30.138142] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.803 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.803 [2024-07-10 14:26:30.269614] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.060 [2024-07-10 14:26:30.520731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.060 [2024-07-10 14:26:30.520803] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.060 [2024-07-10 14:26:30.520832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.060 [2024-07-10 14:26:30.520857] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.060 [2024-07-10 14:26:30.520878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.060 [2024-07-10 14:26:30.520929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.624 14:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.624 14:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:21.624 14:26:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:21.624 14:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:21.624 14:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.624 14:26:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.624 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.wNKYQGCZLl 00:24:21.624 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wNKYQGCZLl 00:24:21.624 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:21.882 [2024-07-10 14:26:31.284061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.882 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:22.139 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:22.396 [2024-07-10 14:26:31.821629] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:22.396 [2024-07-10 14:26:31.821953] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.396 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:22.653 malloc0 00:24:22.653 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:22.910 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.wNKYQGCZLl 00:24:23.167 [2024-07-10 14:26:32.590121] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:23.167 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1429110 00:24:23.167 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:23.167 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:23.167 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1429110 /var/tmp/bdevperf.sock 00:24:23.167 14:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1429110 ']' 00:24:23.167 14:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.167 14:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.167 14:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:23.167 14:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.167 14:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.424 [2024-07-10 14:26:32.684212] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:24:23.424 [2024-07-10 14:26:32.684361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429110 ] 00:24:23.424 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.424 [2024-07-10 14:26:32.812605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.682 [2024-07-10 14:26:33.059026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.246 14:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.246 14:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:24.246 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wNKYQGCZLl 00:24:24.503 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:24.760 [2024-07-10 14:26:34.118321] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:24.760 nvme0n1 00:24:24.760 14:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:25.024 Running I/O for 1 seconds... 
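Taken together, the RPC calls traced in this block form a compact recipe for the TLS path being exercised: the target gets a TCP transport, a subsystem with a malloc namespace, a TLS-enabled listener (-k), and an allowed host bound to a PSK file, while the bdevperf side registers the same PSK with its keyring and attaches with --psk key0. The consolidated sketch below uses only commands that appear above; it assumes the target (on /var/tmp/spdk.sock) and bdevperf (on /var/tmp/bdevperf.sock) are already running and that the PSK file /tmp/tmp.wNKYQGCZLl was created earlier in the test.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  KEY=/tmp/tmp.wNKYQGCZLl    # pre-shared key file generated earlier by tls.sh
  # target side: TCP transport, subsystem, TLS listener, namespace, allowed host + PSK
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $KEY
  # initiator side (bdevperf): register the key with the keyring, then attach over TLS
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 $KEY
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1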
00:24:25.966 00:24:25.966 Latency(us) 00:24:25.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.966 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:25.966 Verification LBA range: start 0x0 length 0x2000 00:24:25.966 nvme0n1 : 1.02 1045.10 4.08 0.00 0.00 120997.07 5898.24 114955.00 00:24:25.966 =================================================================================================================== 00:24:25.966 Total : 1045.10 4.08 0.00 0.00 120997.07 5898.24 114955.00 00:24:25.966 0 00:24:25.966 14:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1429110 00:24:25.966 14:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1429110 ']' 00:24:25.966 14:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1429110 00:24:25.966 14:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:25.966 14:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:25.966 14:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1429110 00:24:25.966 14:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:25.966 14:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:25.966 14:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1429110' 00:24:25.966 killing process with pid 1429110 00:24:25.966 14:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1429110 00:24:25.966 Received shutdown signal, test time was about 1.000000 seconds 00:24:25.966 00:24:25.966 Latency(us) 00:24:25.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.966 =================================================================================================================== 00:24:25.966 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:25.966 14:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1429110 00:24:27.337 14:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1428824 00:24:27.337 14:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1428824 ']' 00:24:27.337 14:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1428824 00:24:27.337 14:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:27.337 14:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:27.337 14:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1428824 00:24:27.337 14:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:27.337 14:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:27.338 14:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1428824' 00:24:27.338 killing process with pid 1428824 00:24:27.338 14:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1428824 00:24:27.338 [2024-07-10 14:26:36.493865] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:27.338 14:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1428824 00:24:28.708 14:26:37 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:24:28.708 14:26:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:28.708 
14:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:28.708 14:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.708 14:26:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1429785 00:24:28.708 14:26:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:28.708 14:26:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1429785 00:24:28.708 14:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1429785 ']' 00:24:28.708 14:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.708 14:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:28.708 14:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.708 14:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.708 14:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.708 [2024-07-10 14:26:37.924731] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:24:28.708 [2024-07-10 14:26:37.924900] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.708 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.708 [2024-07-10 14:26:38.055169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.965 [2024-07-10 14:26:38.306996] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.965 [2024-07-10 14:26:38.307080] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.965 [2024-07-10 14:26:38.307107] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.965 [2024-07-10 14:26:38.307132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.965 [2024-07-10 14:26:38.307154] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:28.965 [2024-07-10 14:26:38.307211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.529 14:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:29.529 14:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:29.529 14:26:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:29.529 14:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:29.529 14:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.529 14:26:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.529 14:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:24:29.529 14:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.529 14:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.529 [2024-07-10 14:26:38.885192] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.529 malloc0 00:24:29.529 [2024-07-10 14:26:38.956878] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:29.529 [2024-07-10 14:26:38.957251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.529 14:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.529 14:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1429933 00:24:29.530 14:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:29.530 14:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1429933 /var/tmp/bdevperf.sock 00:24:29.530 14:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1429933 ']' 00:24:29.530 14:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.530 14:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.530 14:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:29.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:29.530 14:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.530 14:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.787 [2024-07-10 14:26:39.063432] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:24:29.787 [2024-07-10 14:26:39.063574] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429933 ] 00:24:29.787 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.787 [2024-07-10 14:26:39.195008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.044 [2024-07-10 14:26:39.444048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.609 14:26:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:30.609 14:26:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:30.609 14:26:39 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wNKYQGCZLl 00:24:30.865 14:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:31.123 [2024-07-10 14:26:40.455052] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:31.123 nvme0n1 00:24:31.123 14:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:31.382 Running I/O for 1 seconds... 00:24:32.314 00:24:32.315 Latency(us) 00:24:32.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.315 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:32.315 Verification LBA range: start 0x0 length 0x2000 00:24:32.315 nvme0n1 : 1.05 2378.88 9.29 0.00 0.00 52570.38 8738.13 86604.61 00:24:32.315 =================================================================================================================== 00:24:32.315 Total : 2378.88 9.29 0.00 0.00 52570.38 8738.13 86604.61 00:24:32.315 0 00:24:32.315 14:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:24:32.315 14:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.315 14:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.572 14:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.572 14:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:24:32.572 "subsystems": [ 00:24:32.572 { 00:24:32.572 "subsystem": "keyring", 00:24:32.572 "config": [ 00:24:32.572 { 00:24:32.572 "method": "keyring_file_add_key", 00:24:32.572 "params": { 00:24:32.572 "name": "key0", 00:24:32.572 "path": "/tmp/tmp.wNKYQGCZLl" 00:24:32.572 } 00:24:32.572 } 00:24:32.572 ] 00:24:32.572 }, 00:24:32.572 { 00:24:32.572 "subsystem": "iobuf", 00:24:32.572 "config": [ 00:24:32.572 { 00:24:32.572 "method": "iobuf_set_options", 00:24:32.572 "params": { 00:24:32.572 "small_pool_count": 8192, 00:24:32.572 "large_pool_count": 1024, 00:24:32.572 "small_bufsize": 8192, 00:24:32.572 "large_bufsize": 135168 00:24:32.572 } 00:24:32.572 } 00:24:32.572 ] 00:24:32.572 }, 00:24:32.572 { 00:24:32.572 "subsystem": "sock", 00:24:32.572 "config": [ 00:24:32.572 { 00:24:32.572 "method": "sock_set_default_impl", 00:24:32.572 "params": { 00:24:32.572 "impl_name": "posix" 00:24:32.572 } 
00:24:32.572 }, 00:24:32.572 { 00:24:32.572 "method": "sock_impl_set_options", 00:24:32.572 "params": { 00:24:32.572 "impl_name": "ssl", 00:24:32.572 "recv_buf_size": 4096, 00:24:32.572 "send_buf_size": 4096, 00:24:32.572 "enable_recv_pipe": true, 00:24:32.572 "enable_quickack": false, 00:24:32.572 "enable_placement_id": 0, 00:24:32.572 "enable_zerocopy_send_server": true, 00:24:32.572 "enable_zerocopy_send_client": false, 00:24:32.572 "zerocopy_threshold": 0, 00:24:32.572 "tls_version": 0, 00:24:32.572 "enable_ktls": false 00:24:32.572 } 00:24:32.572 }, 00:24:32.572 { 00:24:32.572 "method": "sock_impl_set_options", 00:24:32.572 "params": { 00:24:32.572 "impl_name": "posix", 00:24:32.572 "recv_buf_size": 2097152, 00:24:32.572 "send_buf_size": 2097152, 00:24:32.572 "enable_recv_pipe": true, 00:24:32.572 "enable_quickack": false, 00:24:32.572 "enable_placement_id": 0, 00:24:32.572 "enable_zerocopy_send_server": true, 00:24:32.572 "enable_zerocopy_send_client": false, 00:24:32.572 "zerocopy_threshold": 0, 00:24:32.572 "tls_version": 0, 00:24:32.572 "enable_ktls": false 00:24:32.572 } 00:24:32.572 } 00:24:32.572 ] 00:24:32.572 }, 00:24:32.572 { 00:24:32.572 "subsystem": "vmd", 00:24:32.572 "config": [] 00:24:32.572 }, 00:24:32.572 { 00:24:32.572 "subsystem": "accel", 00:24:32.572 "config": [ 00:24:32.572 { 00:24:32.572 "method": "accel_set_options", 00:24:32.572 "params": { 00:24:32.572 "small_cache_size": 128, 00:24:32.572 "large_cache_size": 16, 00:24:32.572 "task_count": 2048, 00:24:32.572 "sequence_count": 2048, 00:24:32.572 "buf_count": 2048 00:24:32.572 } 00:24:32.572 } 00:24:32.572 ] 00:24:32.572 }, 00:24:32.572 { 00:24:32.572 "subsystem": "bdev", 00:24:32.572 "config": [ 00:24:32.572 { 00:24:32.572 "method": "bdev_set_options", 00:24:32.572 "params": { 00:24:32.572 "bdev_io_pool_size": 65535, 00:24:32.572 "bdev_io_cache_size": 256, 00:24:32.572 "bdev_auto_examine": true, 00:24:32.572 "iobuf_small_cache_size": 128, 00:24:32.572 "iobuf_large_cache_size": 16 00:24:32.572 } 00:24:32.572 }, 00:24:32.572 { 00:24:32.572 "method": "bdev_raid_set_options", 00:24:32.572 "params": { 00:24:32.572 "process_window_size_kb": 1024 00:24:32.572 } 00:24:32.572 }, 00:24:32.572 { 00:24:32.572 "method": "bdev_iscsi_set_options", 00:24:32.572 "params": { 00:24:32.572 "timeout_sec": 30 00:24:32.572 } 00:24:32.572 }, 00:24:32.572 { 00:24:32.572 "method": "bdev_nvme_set_options", 00:24:32.572 "params": { 00:24:32.573 "action_on_timeout": "none", 00:24:32.573 "timeout_us": 0, 00:24:32.573 "timeout_admin_us": 0, 00:24:32.573 "keep_alive_timeout_ms": 10000, 00:24:32.573 "arbitration_burst": 0, 00:24:32.573 "low_priority_weight": 0, 00:24:32.573 "medium_priority_weight": 0, 00:24:32.573 "high_priority_weight": 0, 00:24:32.573 "nvme_adminq_poll_period_us": 10000, 00:24:32.573 "nvme_ioq_poll_period_us": 0, 00:24:32.573 "io_queue_requests": 0, 00:24:32.573 "delay_cmd_submit": true, 00:24:32.573 "transport_retry_count": 4, 00:24:32.573 "bdev_retry_count": 3, 00:24:32.573 "transport_ack_timeout": 0, 00:24:32.573 "ctrlr_loss_timeout_sec": 0, 00:24:32.573 "reconnect_delay_sec": 0, 00:24:32.573 "fast_io_fail_timeout_sec": 0, 00:24:32.573 "disable_auto_failback": false, 00:24:32.573 "generate_uuids": false, 00:24:32.573 "transport_tos": 0, 00:24:32.573 "nvme_error_stat": false, 00:24:32.573 "rdma_srq_size": 0, 00:24:32.573 "io_path_stat": false, 00:24:32.573 "allow_accel_sequence": false, 00:24:32.573 "rdma_max_cq_size": 0, 00:24:32.573 "rdma_cm_event_timeout_ms": 0, 00:24:32.573 "dhchap_digests": [ 00:24:32.573 "sha256", 
00:24:32.573 "sha384", 00:24:32.573 "sha512" 00:24:32.573 ], 00:24:32.573 "dhchap_dhgroups": [ 00:24:32.573 "null", 00:24:32.573 "ffdhe2048", 00:24:32.573 "ffdhe3072", 00:24:32.573 "ffdhe4096", 00:24:32.573 "ffdhe6144", 00:24:32.573 "ffdhe8192" 00:24:32.573 ] 00:24:32.573 } 00:24:32.573 }, 00:24:32.573 { 00:24:32.573 "method": "bdev_nvme_set_hotplug", 00:24:32.573 "params": { 00:24:32.573 "period_us": 100000, 00:24:32.573 "enable": false 00:24:32.573 } 00:24:32.573 }, 00:24:32.573 { 00:24:32.573 "method": "bdev_malloc_create", 00:24:32.573 "params": { 00:24:32.573 "name": "malloc0", 00:24:32.573 "num_blocks": 8192, 00:24:32.573 "block_size": 4096, 00:24:32.573 "physical_block_size": 4096, 00:24:32.573 "uuid": "85d0090b-c2c0-4a42-8bc1-9e649fd13b7b", 00:24:32.573 "optimal_io_boundary": 0 00:24:32.573 } 00:24:32.573 }, 00:24:32.573 { 00:24:32.573 "method": "bdev_wait_for_examine" 00:24:32.573 } 00:24:32.573 ] 00:24:32.573 }, 00:24:32.573 { 00:24:32.573 "subsystem": "nbd", 00:24:32.573 "config": [] 00:24:32.573 }, 00:24:32.573 { 00:24:32.573 "subsystem": "scheduler", 00:24:32.573 "config": [ 00:24:32.573 { 00:24:32.573 "method": "framework_set_scheduler", 00:24:32.573 "params": { 00:24:32.573 "name": "static" 00:24:32.573 } 00:24:32.573 } 00:24:32.573 ] 00:24:32.573 }, 00:24:32.573 { 00:24:32.573 "subsystem": "nvmf", 00:24:32.573 "config": [ 00:24:32.573 { 00:24:32.573 "method": "nvmf_set_config", 00:24:32.573 "params": { 00:24:32.573 "discovery_filter": "match_any", 00:24:32.573 "admin_cmd_passthru": { 00:24:32.573 "identify_ctrlr": false 00:24:32.573 } 00:24:32.573 } 00:24:32.573 }, 00:24:32.573 { 00:24:32.573 "method": "nvmf_set_max_subsystems", 00:24:32.573 "params": { 00:24:32.573 "max_subsystems": 1024 00:24:32.573 } 00:24:32.573 }, 00:24:32.573 { 00:24:32.573 "method": "nvmf_set_crdt", 00:24:32.573 "params": { 00:24:32.573 "crdt1": 0, 00:24:32.573 "crdt2": 0, 00:24:32.573 "crdt3": 0 00:24:32.573 } 00:24:32.573 }, 00:24:32.573 { 00:24:32.573 "method": "nvmf_create_transport", 00:24:32.573 "params": { 00:24:32.573 "trtype": "TCP", 00:24:32.573 "max_queue_depth": 128, 00:24:32.573 "max_io_qpairs_per_ctrlr": 127, 00:24:32.573 "in_capsule_data_size": 4096, 00:24:32.573 "max_io_size": 131072, 00:24:32.573 "io_unit_size": 131072, 00:24:32.573 "max_aq_depth": 128, 00:24:32.573 "num_shared_buffers": 511, 00:24:32.573 "buf_cache_size": 4294967295, 00:24:32.573 "dif_insert_or_strip": false, 00:24:32.573 "zcopy": false, 00:24:32.573 "c2h_success": false, 00:24:32.573 "sock_priority": 0, 00:24:32.573 "abort_timeout_sec": 1, 00:24:32.573 "ack_timeout": 0, 00:24:32.573 "data_wr_pool_size": 0 00:24:32.573 } 00:24:32.573 }, 00:24:32.573 { 00:24:32.573 "method": "nvmf_create_subsystem", 00:24:32.573 "params": { 00:24:32.573 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.573 "allow_any_host": false, 00:24:32.573 "serial_number": "00000000000000000000", 00:24:32.573 "model_number": "SPDK bdev Controller", 00:24:32.573 "max_namespaces": 32, 00:24:32.573 "min_cntlid": 1, 00:24:32.573 "max_cntlid": 65519, 00:24:32.573 "ana_reporting": false 00:24:32.573 } 00:24:32.573 }, 00:24:32.573 { 00:24:32.573 "method": "nvmf_subsystem_add_host", 00:24:32.573 "params": { 00:24:32.573 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.573 "host": "nqn.2016-06.io.spdk:host1", 00:24:32.573 "psk": "key0" 00:24:32.573 } 00:24:32.573 }, 00:24:32.573 { 00:24:32.573 "method": "nvmf_subsystem_add_ns", 00:24:32.573 "params": { 00:24:32.573 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.573 "namespace": { 00:24:32.573 "nsid": 1, 
00:24:32.573 "bdev_name": "malloc0", 00:24:32.573 "nguid": "85D0090BC2C04A428BC19E649FD13B7B", 00:24:32.573 "uuid": "85d0090b-c2c0-4a42-8bc1-9e649fd13b7b", 00:24:32.573 "no_auto_visible": false 00:24:32.573 } 00:24:32.573 } 00:24:32.573 }, 00:24:32.573 { 00:24:32.573 "method": "nvmf_subsystem_add_listener", 00:24:32.573 "params": { 00:24:32.573 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.573 "listen_address": { 00:24:32.573 "trtype": "TCP", 00:24:32.573 "adrfam": "IPv4", 00:24:32.573 "traddr": "10.0.0.2", 00:24:32.573 "trsvcid": "4420" 00:24:32.573 }, 00:24:32.573 "secure_channel": true 00:24:32.573 } 00:24:32.573 } 00:24:32.573 ] 00:24:32.573 } 00:24:32.573 ] 00:24:32.573 }' 00:24:32.573 14:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:32.831 14:26:42 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:24:32.831 "subsystems": [ 00:24:32.831 { 00:24:32.831 "subsystem": "keyring", 00:24:32.831 "config": [ 00:24:32.831 { 00:24:32.831 "method": "keyring_file_add_key", 00:24:32.831 "params": { 00:24:32.831 "name": "key0", 00:24:32.831 "path": "/tmp/tmp.wNKYQGCZLl" 00:24:32.831 } 00:24:32.831 } 00:24:32.831 ] 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "subsystem": "iobuf", 00:24:32.831 "config": [ 00:24:32.831 { 00:24:32.831 "method": "iobuf_set_options", 00:24:32.831 "params": { 00:24:32.831 "small_pool_count": 8192, 00:24:32.831 "large_pool_count": 1024, 00:24:32.831 "small_bufsize": 8192, 00:24:32.831 "large_bufsize": 135168 00:24:32.831 } 00:24:32.831 } 00:24:32.831 ] 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "subsystem": "sock", 00:24:32.831 "config": [ 00:24:32.831 { 00:24:32.831 "method": "sock_set_default_impl", 00:24:32.831 "params": { 00:24:32.831 "impl_name": "posix" 00:24:32.831 } 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "method": "sock_impl_set_options", 00:24:32.831 "params": { 00:24:32.831 "impl_name": "ssl", 00:24:32.831 "recv_buf_size": 4096, 00:24:32.831 "send_buf_size": 4096, 00:24:32.831 "enable_recv_pipe": true, 00:24:32.831 "enable_quickack": false, 00:24:32.831 "enable_placement_id": 0, 00:24:32.831 "enable_zerocopy_send_server": true, 00:24:32.831 "enable_zerocopy_send_client": false, 00:24:32.831 "zerocopy_threshold": 0, 00:24:32.831 "tls_version": 0, 00:24:32.831 "enable_ktls": false 00:24:32.831 } 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "method": "sock_impl_set_options", 00:24:32.831 "params": { 00:24:32.831 "impl_name": "posix", 00:24:32.831 "recv_buf_size": 2097152, 00:24:32.831 "send_buf_size": 2097152, 00:24:32.831 "enable_recv_pipe": true, 00:24:32.831 "enable_quickack": false, 00:24:32.831 "enable_placement_id": 0, 00:24:32.831 "enable_zerocopy_send_server": true, 00:24:32.831 "enable_zerocopy_send_client": false, 00:24:32.831 "zerocopy_threshold": 0, 00:24:32.831 "tls_version": 0, 00:24:32.831 "enable_ktls": false 00:24:32.831 } 00:24:32.831 } 00:24:32.831 ] 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "subsystem": "vmd", 00:24:32.831 "config": [] 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "subsystem": "accel", 00:24:32.831 "config": [ 00:24:32.831 { 00:24:32.831 "method": "accel_set_options", 00:24:32.831 "params": { 00:24:32.831 "small_cache_size": 128, 00:24:32.831 "large_cache_size": 16, 00:24:32.831 "task_count": 2048, 00:24:32.831 "sequence_count": 2048, 00:24:32.831 "buf_count": 2048 00:24:32.831 } 00:24:32.831 } 00:24:32.831 ] 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "subsystem": "bdev", 00:24:32.831 "config": [ 
00:24:32.831 { 00:24:32.831 "method": "bdev_set_options", 00:24:32.831 "params": { 00:24:32.831 "bdev_io_pool_size": 65535, 00:24:32.831 "bdev_io_cache_size": 256, 00:24:32.831 "bdev_auto_examine": true, 00:24:32.831 "iobuf_small_cache_size": 128, 00:24:32.831 "iobuf_large_cache_size": 16 00:24:32.831 } 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "method": "bdev_raid_set_options", 00:24:32.831 "params": { 00:24:32.831 "process_window_size_kb": 1024 00:24:32.831 } 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "method": "bdev_iscsi_set_options", 00:24:32.831 "params": { 00:24:32.831 "timeout_sec": 30 00:24:32.831 } 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "method": "bdev_nvme_set_options", 00:24:32.831 "params": { 00:24:32.831 "action_on_timeout": "none", 00:24:32.831 "timeout_us": 0, 00:24:32.831 "timeout_admin_us": 0, 00:24:32.831 "keep_alive_timeout_ms": 10000, 00:24:32.831 "arbitration_burst": 0, 00:24:32.831 "low_priority_weight": 0, 00:24:32.831 "medium_priority_weight": 0, 00:24:32.831 "high_priority_weight": 0, 00:24:32.831 "nvme_adminq_poll_period_us": 10000, 00:24:32.831 "nvme_ioq_poll_period_us": 0, 00:24:32.831 "io_queue_requests": 512, 00:24:32.831 "delay_cmd_submit": true, 00:24:32.831 "transport_retry_count": 4, 00:24:32.831 "bdev_retry_count": 3, 00:24:32.831 "transport_ack_timeout": 0, 00:24:32.831 "ctrlr_loss_timeout_sec": 0, 00:24:32.831 "reconnect_delay_sec": 0, 00:24:32.831 "fast_io_fail_timeout_sec": 0, 00:24:32.831 "disable_auto_failback": false, 00:24:32.831 "generate_uuids": false, 00:24:32.831 "transport_tos": 0, 00:24:32.831 "nvme_error_stat": false, 00:24:32.831 "rdma_srq_size": 0, 00:24:32.831 "io_path_stat": false, 00:24:32.831 "allow_accel_sequence": false, 00:24:32.831 "rdma_max_cq_size": 0, 00:24:32.831 "rdma_cm_event_timeout_ms": 0, 00:24:32.831 "dhchap_digests": [ 00:24:32.831 "sha256", 00:24:32.831 "sha384", 00:24:32.831 "sha512" 00:24:32.831 ], 00:24:32.831 "dhchap_dhgroups": [ 00:24:32.831 "null", 00:24:32.831 "ffdhe2048", 00:24:32.831 "ffdhe3072", 00:24:32.831 "ffdhe4096", 00:24:32.831 "ffdhe6144", 00:24:32.831 "ffdhe8192" 00:24:32.831 ] 00:24:32.831 } 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "method": "bdev_nvme_attach_controller", 00:24:32.831 "params": { 00:24:32.831 "name": "nvme0", 00:24:32.831 "trtype": "TCP", 00:24:32.831 "adrfam": "IPv4", 00:24:32.831 "traddr": "10.0.0.2", 00:24:32.831 "trsvcid": "4420", 00:24:32.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.831 "prchk_reftag": false, 00:24:32.831 "prchk_guard": false, 00:24:32.831 "ctrlr_loss_timeout_sec": 0, 00:24:32.831 "reconnect_delay_sec": 0, 00:24:32.831 "fast_io_fail_timeout_sec": 0, 00:24:32.831 "psk": "key0", 00:24:32.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.831 "hdgst": false, 00:24:32.831 "ddgst": false 00:24:32.831 } 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "method": "bdev_nvme_set_hotplug", 00:24:32.831 "params": { 00:24:32.831 "period_us": 100000, 00:24:32.831 "enable": false 00:24:32.831 } 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "method": "bdev_enable_histogram", 00:24:32.831 "params": { 00:24:32.831 "name": "nvme0n1", 00:24:32.831 "enable": true 00:24:32.831 } 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "method": "bdev_wait_for_examine" 00:24:32.831 } 00:24:32.831 ] 00:24:32.831 }, 00:24:32.831 { 00:24:32.831 "subsystem": "nbd", 00:24:32.831 "config": [] 00:24:32.831 } 00:24:32.831 ] 00:24:32.831 }' 00:24:32.831 14:26:42 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1429933 00:24:32.831 14:26:42 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 1429933 ']' 00:24:32.831 14:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1429933 00:24:32.831 14:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:32.831 14:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:32.831 14:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1429933 00:24:32.831 14:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:32.831 14:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:32.831 14:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1429933' 00:24:32.831 killing process with pid 1429933 00:24:32.832 14:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1429933 00:24:32.832 Received shutdown signal, test time was about 1.000000 seconds 00:24:32.832 00:24:32.832 Latency(us) 00:24:32.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.832 =================================================================================================================== 00:24:32.832 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:32.832 14:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1429933 00:24:34.202 14:26:43 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1429785 00:24:34.202 14:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1429785 ']' 00:24:34.202 14:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1429785 00:24:34.202 14:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:34.202 14:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:34.202 14:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1429785 00:24:34.202 14:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:34.202 14:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:34.202 14:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1429785' 00:24:34.202 killing process with pid 1429785 00:24:34.202 14:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1429785 00:24:34.202 14:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1429785 00:24:35.571 14:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:24:35.571 14:26:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:35.571 14:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:24:35.571 "subsystems": [ 00:24:35.571 { 00:24:35.571 "subsystem": "keyring", 00:24:35.571 "config": [ 00:24:35.571 { 00:24:35.571 "method": "keyring_file_add_key", 00:24:35.571 "params": { 00:24:35.571 "name": "key0", 00:24:35.571 "path": "/tmp/tmp.wNKYQGCZLl" 00:24:35.571 } 00:24:35.571 } 00:24:35.571 ] 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "subsystem": "iobuf", 00:24:35.571 "config": [ 00:24:35.571 { 00:24:35.571 "method": "iobuf_set_options", 00:24:35.571 "params": { 00:24:35.571 "small_pool_count": 8192, 00:24:35.571 "large_pool_count": 1024, 00:24:35.571 "small_bufsize": 8192, 00:24:35.571 "large_bufsize": 135168 00:24:35.571 } 00:24:35.571 } 00:24:35.571 ] 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "subsystem": "sock", 00:24:35.571 "config": [ 00:24:35.571 { 
00:24:35.571 "method": "sock_set_default_impl", 00:24:35.571 "params": { 00:24:35.571 "impl_name": "posix" 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "sock_impl_set_options", 00:24:35.571 "params": { 00:24:35.571 "impl_name": "ssl", 00:24:35.571 "recv_buf_size": 4096, 00:24:35.571 "send_buf_size": 4096, 00:24:35.571 "enable_recv_pipe": true, 00:24:35.571 "enable_quickack": false, 00:24:35.571 "enable_placement_id": 0, 00:24:35.571 "enable_zerocopy_send_server": true, 00:24:35.571 "enable_zerocopy_send_client": false, 00:24:35.571 "zerocopy_threshold": 0, 00:24:35.571 "tls_version": 0, 00:24:35.571 "enable_ktls": false 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "sock_impl_set_options", 00:24:35.571 "params": { 00:24:35.571 "impl_name": "posix", 00:24:35.571 "recv_buf_size": 2097152, 00:24:35.571 "send_buf_size": 2097152, 00:24:35.571 "enable_recv_pipe": true, 00:24:35.571 "enable_quickack": false, 00:24:35.571 "enable_placement_id": 0, 00:24:35.571 "enable_zerocopy_send_server": true, 00:24:35.571 "enable_zerocopy_send_client": false, 00:24:35.571 "zerocopy_threshold": 0, 00:24:35.571 "tls_version": 0, 00:24:35.571 "enable_ktls": false 00:24:35.571 } 00:24:35.571 } 00:24:35.571 ] 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "subsystem": "vmd", 00:24:35.571 "config": [] 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "subsystem": "accel", 00:24:35.571 "config": [ 00:24:35.571 { 00:24:35.571 "method": "accel_set_options", 00:24:35.571 "params": { 00:24:35.571 "small_cache_size": 128, 00:24:35.571 "large_cache_size": 16, 00:24:35.571 "task_count": 2048, 00:24:35.571 "sequence_count": 2048, 00:24:35.571 "buf_count": 2048 00:24:35.571 } 00:24:35.571 } 00:24:35.571 ] 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "subsystem": "bdev", 00:24:35.571 "config": [ 00:24:35.571 { 00:24:35.571 "method": "bdev_set_options", 00:24:35.571 "params": { 00:24:35.571 "bdev_io_pool_size": 65535, 00:24:35.571 "bdev_io_cache_size": 256, 00:24:35.571 "bdev_auto_examine": true, 00:24:35.571 "iobuf_small_cache_size": 128, 00:24:35.571 "iobuf_large_cache_size": 16 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "bdev_raid_set_options", 00:24:35.571 "params": { 00:24:35.571 "process_window_size_kb": 1024 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "bdev_iscsi_set_options", 00:24:35.571 "params": { 00:24:35.571 "timeout_sec": 30 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "bdev_nvme_set_options", 00:24:35.571 "params": { 00:24:35.571 "action_on_timeout": "none", 00:24:35.571 "timeout_us": 0, 00:24:35.571 "timeout_admin_us": 0, 00:24:35.571 "keep_alive_timeout_ms": 10000, 00:24:35.571 "arbitration_burst": 0, 00:24:35.571 "low_priority_weight": 0, 00:24:35.571 "medium_priority_weight": 0, 00:24:35.571 "high_priority_weight": 0, 00:24:35.571 "nvme_adminq_poll_period_us": 10000, 00:24:35.571 "nvme_ioq_poll_period_us": 0, 00:24:35.571 "io_queue_requests": 0, 00:24:35.571 "delay_cmd_submit": true, 00:24:35.571 "transport_retry_count": 4, 00:24:35.571 "bdev_retry_count": 3, 00:24:35.571 "transport_ack_timeout": 0, 00:24:35.571 "ctrlr_loss_timeout_sec": 0, 00:24:35.571 "reconnect_delay_sec": 0, 00:24:35.571 "fast_io_fail_timeout_sec": 0, 00:24:35.571 "disable_auto_failback": false, 00:24:35.571 "generate_uuids": false, 00:24:35.571 "transport_tos": 0, 00:24:35.571 "nvme_error_stat": false, 00:24:35.571 "rdma_srq_size": 0, 00:24:35.571 "io_path_stat": false, 00:24:35.571 "allow_accel_sequence": false, 00:24:35.571 
"rdma_max_cq_size": 0, 00:24:35.571 "rdma_cm_event_timeout_ms": 0, 00:24:35.571 "dhchap_digests": [ 00:24:35.571 "sha256", 00:24:35.571 "sha384", 00:24:35.571 "sha512" 00:24:35.571 ], 00:24:35.571 "dhchap_dhgroups": [ 00:24:35.571 "null", 00:24:35.571 "ffdhe2048", 00:24:35.571 "ffdhe3072", 00:24:35.571 "ffdhe4096", 00:24:35.571 "ffdhe6144", 00:24:35.571 "ffdhe8192" 00:24:35.571 ] 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "bdev_nvme_set_hotplug", 00:24:35.571 "params": { 00:24:35.571 "period_us": 100000, 00:24:35.571 "enable": false 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "bdev_malloc_create", 00:24:35.571 "params": { 00:24:35.571 "name": "malloc0", 00:24:35.571 "num_blocks": 8192, 00:24:35.571 "block_size": 4096, 00:24:35.571 "physical_block_size": 4096, 00:24:35.571 "uuid": "85d0090b-c2c0-4a42-8bc1-9e649fd13b7b", 00:24:35.571 "optimal_io_boundary": 0 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "bdev_wait_for_examine" 00:24:35.571 } 00:24:35.571 ] 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "subsystem": "nbd", 00:24:35.571 "config": [] 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "subsystem": "scheduler", 00:24:35.571 "config": [ 00:24:35.571 { 00:24:35.571 "method": "framework_set_scheduler", 00:24:35.571 "params": { 00:24:35.571 "name": "static" 00:24:35.571 } 00:24:35.571 } 00:24:35.571 ] 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "subsystem": "nvmf", 00:24:35.571 "config": [ 00:24:35.571 { 00:24:35.571 "method": "nvmf_set_config", 00:24:35.571 "params": { 00:24:35.571 "discovery_filter": "match_any", 00:24:35.571 "admin_cmd_passthru": { 00:24:35.571 "identify_ctrlr": false 00:24:35.571 } 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "nvmf_set_max_subsystems", 00:24:35.571 "params": { 00:24:35.571 "max_subsystems": 1024 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "nvmf_set_crdt", 00:24:35.571 "params": { 00:24:35.571 "crdt1": 0, 00:24:35.571 "crdt2": 0, 00:24:35.571 "crdt3": 0 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "nvmf_create_transport", 00:24:35.571 "params": { 00:24:35.571 "trtype": "TCP", 00:24:35.571 "max_queue_depth": 128, 00:24:35.571 "max_io_qpairs_per_ctrlr": 127, 00:24:35.571 "in_capsule_data_size": 4096, 00:24:35.571 "max_io_size": 131072, 00:24:35.571 "io_unit_size": 131072, 00:24:35.571 "max_aq_depth": 128, 00:24:35.571 "num_shared_buffers": 511, 00:24:35.571 "buf_cache_size": 4294967295, 00:24:35.571 "dif_insert_or_strip": false, 00:24:35.571 "zcopy": false, 00:24:35.571 "c2h_success": false, 00:24:35.571 "sock_priority": 0, 00:24:35.571 "abort_timeout_sec": 1, 00:24:35.571 "ack_timeout": 0, 00:24:35.571 "data_wr_pool_size": 0 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "nvmf_create_subsystem", 00:24:35.571 "params": { 00:24:35.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.571 "allow_any_host": false, 00:24:35.571 "serial_number": "00000000000000000000", 00:24:35.571 "model_number": "SPDK bdev Controller", 00:24:35.571 "max_namespaces": 32, 00:24:35.571 "min_cntlid": 1, 00:24:35.571 "max_cntlid": 65519, 00:24:35.571 "ana_reporting": false 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "nvmf_subsystem_add_host", 00:24:35.571 "params": { 00:24:35.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.571 "host": "nqn.2016-06.io.spdk:host1", 00:24:35.571 "psk": "key0" 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "nvmf_subsystem_add_ns", 00:24:35.571 
"params": { 00:24:35.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.571 "namespace": { 00:24:35.571 "nsid": 1, 00:24:35.571 "bdev_name": "malloc0", 00:24:35.571 "nguid": "85D0090BC2C04A428BC19E649FD13B7B", 00:24:35.571 "uuid": "85d0090b-c2c0-4a42-8bc1-9e649fd13b7b", 00:24:35.571 "no_auto_visible": false 00:24:35.571 } 00:24:35.571 } 00:24:35.571 }, 00:24:35.571 { 00:24:35.571 "method": "nvmf_subsystem_add_listener", 00:24:35.571 "params": { 00:24:35.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.571 "listen_address": { 00:24:35.571 "trtype": "TCP", 00:24:35.571 "adrfam": "IPv4", 00:24:35.571 "traddr": "10.0.0.2", 00:24:35.571 "trsvcid": "4420" 00:24:35.571 }, 00:24:35.571 "secure_channel": true 00:24:35.571 } 00:24:35.571 } 00:24:35.571 ] 00:24:35.571 } 00:24:35.571 ] 00:24:35.571 }' 00:24:35.571 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:35.571 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.571 14:26:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1430610 00:24:35.571 14:26:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:35.571 14:26:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1430610 00:24:35.571 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1430610 ']' 00:24:35.571 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.571 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.571 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.571 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.571 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.571 [2024-07-10 14:26:44.853468] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:24:35.571 [2024-07-10 14:26:44.853633] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.571 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.571 [2024-07-10 14:26:44.995158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.899 [2024-07-10 14:26:45.255652] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.899 [2024-07-10 14:26:45.255729] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.899 [2024-07-10 14:26:45.255758] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.899 [2024-07-10 14:26:45.255783] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.899 [2024-07-10 14:26:45.255805] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:35.899 [2024-07-10 14:26:45.255942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.475 [2024-07-10 14:26:45.803712] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.475 [2024-07-10 14:26:45.835661] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:36.475 [2024-07-10 14:26:45.835938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.475 14:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.475 14:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:36.475 14:26:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:36.475 14:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:36.475 14:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.475 14:26:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.475 14:26:45 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1430762 00:24:36.475 14:26:45 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1430762 /var/tmp/bdevperf.sock 00:24:36.475 14:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1430762 ']' 00:24:36.475 14:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.475 14:26:45 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:36.475 14:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:36.475 14:26:45 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:24:36.475 "subsystems": [ 00:24:36.475 { 00:24:36.475 "subsystem": "keyring", 00:24:36.475 "config": [ 00:24:36.475 { 00:24:36.475 "method": "keyring_file_add_key", 00:24:36.475 "params": { 00:24:36.475 "name": "key0", 00:24:36.475 "path": "/tmp/tmp.wNKYQGCZLl" 00:24:36.475 } 00:24:36.475 } 00:24:36.475 ] 00:24:36.475 }, 00:24:36.475 { 00:24:36.475 "subsystem": "iobuf", 00:24:36.475 "config": [ 00:24:36.475 { 00:24:36.475 "method": "iobuf_set_options", 00:24:36.475 "params": { 00:24:36.475 "small_pool_count": 8192, 00:24:36.475 "large_pool_count": 1024, 00:24:36.475 "small_bufsize": 8192, 00:24:36.475 "large_bufsize": 135168 00:24:36.475 } 00:24:36.475 } 00:24:36.475 ] 00:24:36.475 }, 00:24:36.475 { 00:24:36.475 "subsystem": "sock", 00:24:36.475 "config": [ 00:24:36.475 { 00:24:36.475 "method": "sock_set_default_impl", 00:24:36.475 "params": { 00:24:36.475 "impl_name": "posix" 00:24:36.475 } 00:24:36.475 }, 00:24:36.475 { 00:24:36.475 "method": "sock_impl_set_options", 00:24:36.475 "params": { 00:24:36.475 "impl_name": "ssl", 00:24:36.475 "recv_buf_size": 4096, 00:24:36.475 "send_buf_size": 4096, 00:24:36.475 "enable_recv_pipe": true, 00:24:36.475 "enable_quickack": false, 00:24:36.475 "enable_placement_id": 0, 00:24:36.475 "enable_zerocopy_send_server": true, 00:24:36.475 "enable_zerocopy_send_client": false, 00:24:36.475 "zerocopy_threshold": 0, 00:24:36.475 "tls_version": 0, 00:24:36.475 "enable_ktls": false 00:24:36.475 } 00:24:36.475 }, 00:24:36.475 { 00:24:36.475 "method": "sock_impl_set_options", 00:24:36.475 "params": { 00:24:36.475 "impl_name": "posix", 00:24:36.475 "recv_buf_size": 2097152, 00:24:36.475 "send_buf_size": 2097152, 00:24:36.475 
"enable_recv_pipe": true, 00:24:36.475 "enable_quickack": false, 00:24:36.475 "enable_placement_id": 0, 00:24:36.475 "enable_zerocopy_send_server": true, 00:24:36.475 "enable_zerocopy_send_client": false, 00:24:36.475 "zerocopy_threshold": 0, 00:24:36.475 "tls_version": 0, 00:24:36.475 "enable_ktls": false 00:24:36.475 } 00:24:36.475 } 00:24:36.475 ] 00:24:36.475 }, 00:24:36.475 { 00:24:36.475 "subsystem": "vmd", 00:24:36.475 "config": [] 00:24:36.475 }, 00:24:36.475 { 00:24:36.475 "subsystem": "accel", 00:24:36.475 "config": [ 00:24:36.475 { 00:24:36.475 "method": "accel_set_options", 00:24:36.475 "params": { 00:24:36.475 "small_cache_size": 128, 00:24:36.475 "large_cache_size": 16, 00:24:36.475 "task_count": 2048, 00:24:36.475 "sequence_count": 2048, 00:24:36.475 "buf_count": 2048 00:24:36.475 } 00:24:36.475 } 00:24:36.475 ] 00:24:36.475 }, 00:24:36.475 { 00:24:36.475 "subsystem": "bdev", 00:24:36.475 "config": [ 00:24:36.475 { 00:24:36.475 "method": "bdev_set_options", 00:24:36.475 "params": { 00:24:36.475 "bdev_io_pool_size": 65535, 00:24:36.475 "bdev_io_cache_size": 256, 00:24:36.475 "bdev_auto_examine": true, 00:24:36.475 "iobuf_small_cache_size": 128, 00:24:36.475 "iobuf_large_cache_size": 16 00:24:36.475 } 00:24:36.475 }, 00:24:36.475 { 00:24:36.475 "method": "bdev_raid_set_options", 00:24:36.475 "params": { 00:24:36.475 "process_window_size_kb": 1024 00:24:36.475 } 00:24:36.475 }, 00:24:36.475 { 00:24:36.475 "method": "bdev_iscsi_set_options", 00:24:36.475 "params": { 00:24:36.475 "timeout_sec": 30 00:24:36.475 } 00:24:36.475 }, 00:24:36.475 { 00:24:36.475 "method": "bdev_nvme_set_options", 00:24:36.475 "params": { 00:24:36.475 "action_on_timeout": "none", 00:24:36.475 "timeout_us": 0, 00:24:36.475 "timeout_admin_us": 0, 00:24:36.475 "keep_alive_timeout_ms": 10000, 00:24:36.475 "arbitration_burst": 0, 00:24:36.475 "low_priority_weight": 0, 00:24:36.475 "medium_priority_weight": 0, 00:24:36.475 "high_priority_weight": 0, 00:24:36.475 "nvme_adminq_poll_period_us": 10000, 00:24:36.475 "nvme_ioq_poll_period_us": 0, 00:24:36.475 "io_queue_requests": 512, 00:24:36.475 "delay_cmd_submit": true, 00:24:36.475 "transport_retry_count": 4, 00:24:36.475 "bdev_retry_count": 3, 00:24:36.475 "transport_ack_timeout": 0, 00:24:36.475 "ctrlr_loss_timeout_sec": 0, 00:24:36.475 "reconnect_delay_sec": 0, 00:24:36.475 "fast_io_fail_timeout_sec": 0, 00:24:36.475 "disable_auto_failback": false, 00:24:36.475 "generate_uuids": false, 00:24:36.475 "transport_tos": 0, 00:24:36.475 "nvme_error_stat": false, 00:24:36.475 "rdma_srq_size": 0, 00:24:36.475 "io_path_stat": false, 00:24:36.475 "allow_accel_sequence": false, 00:24:36.475 "rdma_max_cq_size": 0, 00:24:36.475 "rdma_cm_event_timeout_ms": 0, 00:24:36.475 "dhchap_digests": [ 00:24:36.475 "sha256", 00:24:36.476 "sha384", 00:24:36.476 "sha512" 00:24:36.476 ], 00:24:36.476 "dhchap_dhgroups": [ 00:24:36.476 "null", 00:24:36.476 "ffdhe2048", 00:24:36.476 "ffdhe3072", 00:24:36.476 "ffdhe4096", 00:24:36.476 "ffdhe6144", 00:24:36.476 "ffdhe8192" 00:24:36.476 ] 00:24:36.476 } 00:24:36.476 }, 00:24:36.476 { 00:24:36.476 "method": "bdev_nvme_attach_controller", 00:24:36.476 "params": { 00:24:36.476 "name": "nvme0", 00:24:36.476 "trtype": "TCP", 00:24:36.476 "adrfam": "IPv4", 00:24:36.476 "traddr": "10.0.0.2", 00:24:36.476 "trsvcid": "4420", 00:24:36.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.476 "prchk_reftag": false, 00:24:36.476 "prchk_guard": false, 00:24:36.476 "ctrlr_loss_timeout_sec": 0, 00:24:36.476 "reconnect_delay_sec": 0, 00:24:36.476 
"fast_io_fail_timeout_sec": 0, 00:24:36.476 "psk": "key0", 00:24:36.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:36.476 "hdgst": false, 00:24:36.476 "ddgst": false 00:24:36.476 } 00:24:36.476 }, 00:24:36.476 { 00:24:36.476 "method": "bdev_nvme_set_hotplug", 00:24:36.476 "params": { 00:24:36.476 "period_us": 100000, 00:24:36.476 "enable": false 00:24:36.476 } 00:24:36.476 }, 00:24:36.476 { 00:24:36.476 "method": "bdev_enable_histogram", 00:24:36.476 "params": { 00:24:36.476 "name": "nvme0n1", 00:24:36.476 "enable": true 00:24:36.476 } 00:24:36.476 }, 00:24:36.476 { 00:24:36.476 "method": "bdev_wait_for_examine" 00:24:36.476 } 00:24:36.476 ] 00:24:36.476 }, 00:24:36.476 { 00:24:36.476 "subsystem": "nbd", 00:24:36.476 "config": [] 00:24:36.476 } 00:24:36.476 ] 00:24:36.476 }' 00:24:36.476 14:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:36.476 14:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.476 14:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.733 [2024-07-10 14:26:45.972730] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:24:36.733 [2024-07-10 14:26:45.972887] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430762 ] 00:24:36.733 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.733 [2024-07-10 14:26:46.101646] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.991 [2024-07-10 14:26:46.354738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.556 [2024-07-10 14:26:46.787638] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:37.556 14:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.556 14:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:37.556 14:26:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:37.556 14:26:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:24:37.813 14:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.813 14:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:37.813 Running I/O for 1 seconds... 
00:24:39.184 00:24:39.184 Latency(us) 00:24:39.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.184 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:39.184 Verification LBA range: start 0x0 length 0x2000 00:24:39.184 nvme0n1 : 1.05 2404.43 9.39 0.00 0.00 52086.04 8738.13 79614.10 00:24:39.184 =================================================================================================================== 00:24:39.184 Total : 2404.43 9.39 0.00 0.00 52086.04 8738.13 79614.10 00:24:39.184 0 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:39.184 nvmf_trace.0 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1430762 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1430762 ']' 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1430762 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1430762 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1430762' 00:24:39.184 killing process with pid 1430762 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1430762 00:24:39.184 Received shutdown signal, test time was about 1.000000 seconds 00:24:39.184 00:24:39.184 Latency(us) 00:24:39.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.184 =================================================================================================================== 00:24:39.184 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:39.184 14:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1430762 00:24:40.117 14:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
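The cleanup above also preserves the trace buffer that the target advertised at startup: process_shm locates the per-app shared-memory file under /dev/shm and archives it into the job's output directory so the events can be decoded offline later (the live capture form is the one the startup notices suggest, spdk_trace -s nvmf -i 0). The pattern, condensed from the trace above with the output path abbreviated to $output_dir:

    # locate and archive the trace shm file produced by nvmf_tgt (-i 0 -> nvmf_trace.0)
    shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0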
00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:40.118 rmmod nvme_tcp 00:24:40.118 rmmod nvme_fabrics 00:24:40.118 rmmod nvme_keyring 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1430610 ']' 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1430610 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1430610 ']' 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1430610 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:40.118 14:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1430610 00:24:40.375 14:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:40.375 14:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:40.375 14:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1430610' 00:24:40.375 killing process with pid 1430610 00:24:40.375 14:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1430610 00:24:40.375 14:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1430610 00:24:41.747 14:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:41.747 14:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:41.747 14:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:41.747 14:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:41.747 14:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:41.747 14:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.747 14:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.747 14:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.642 14:26:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:43.642 14:26:53 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.bDdsj4xPUp /tmp/tmp.ntmXNNvI4R /tmp/tmp.wNKYQGCZLl 00:24:43.642 00:24:43.642 real 1m50.366s 00:24:43.642 user 3m0.059s 00:24:43.642 sys 0m26.512s 00:24:43.642 14:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:43.642 14:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.642 ************************************ 00:24:43.642 END TEST nvmf_tls 00:24:43.642 ************************************ 00:24:43.642 14:26:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:43.642 14:26:53 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:43.642 14:26:53 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:43.642 14:26:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:43.642 14:26:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:43.899 ************************************ 00:24:43.899 START TEST nvmf_fips 00:24:43.899 ************************************ 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:43.899 * Looking for test storage... 00:24:43.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:43.899 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:43.900 
14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:43.900 Error setting digest 00:24:43.900 00C2CAA93A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:43.900 00C2CAA93A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:43.900 14:26:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:45.797 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:45.798 
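The array-building just traced is common.sh assembling allow-lists of NVMe-oF-capable NICs by PCI vendor/device ID (Intel E810 0x1592/0x159b, X722 0x37d2, and a set of Mellanox ConnectX IDs); each matching PCI address is then resolved to its kernel interface through the /sys/bus/pci/devices/<addr>/net/ glob visible further on. A small sketch of that resolution step on its own, using one of the addresses discovered in this run (the loop body is illustrative, not the SPDK code):

    pci=0000:0a:00.0    # one of the two E810 functions found in this run
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $netdir ]] || continue     # glob did not match: no netdev bound to this function
        dev=${netdir##*/}
        state=$(cat "$netdir/operstate" 2>/dev/null)
        echo "PCI $pci -> net device $dev (operstate: ${state:-unknown})"
    done

On this host the two 0x159b functions resolve to cvl_0_0 and cvl_0_1, which is what the 'Found net devices under ...' lines in the trace report.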
14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:45.798 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:45.798 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:45.798 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:45.798 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.798 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:45.799 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.056 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:46.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:24:46.057 00:24:46.057 --- 10.0.0.2 ping statistics --- 00:24:46.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.057 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:46.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:24:46.057 00:24:46.057 --- 10.0.0.1 ping statistics --- 00:24:46.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.057 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1433264 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1433264 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1433264 ']' 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.057 14:26:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:46.057 [2024-07-10 14:26:55.466152] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:24:46.057 [2024-07-10 14:26:55.466286] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.314 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.314 [2024-07-10 14:26:55.609666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.572 [2024-07-10 14:26:55.867480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.572 [2024-07-10 14:26:55.867562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
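Condensed from the preceding trace: the test isolates target from initiator on a single machine by moving one E810 port (cvl_0_0) into a fresh network namespace while its sibling port (cvl_0_1) stays in the root namespace, so NVMe/TCP traffic between 10.0.0.2 and 10.0.0.1 goes out through the physical ports rather than a loopback route. The same setup with the trace prefixes stripped (interface and namespace names are the ones used in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # open the NVMe/TCP port on the initiator side
    ping -c 1 10.0.0.2                                                     # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # target namespace -> initiator

Both pings answering in well under a millisecond, as the statistics above show, is the go/no-go check before nvmf_tgt is launched inside the namespace, which is the startup whose NOTICE lines follow.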
00:24:46.572 [2024-07-10 14:26:55.867590] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.572 [2024-07-10 14:26:55.867615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.572 [2024-07-10 14:26:55.867637] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.573 [2024-07-10 14:26:55.867693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.139 14:26:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:47.139 14:26:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:47.139 14:26:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:47.139 14:26:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:47.139 14:26:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:47.139 14:26:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.139 14:26:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:47.139 14:26:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:47.139 14:26:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:47.139 14:26:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:47.139 14:26:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:47.139 14:26:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:47.139 14:26:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:47.139 14:26:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:47.397 [2024-07-10 14:26:56.635600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.397 [2024-07-10 14:26:56.651535] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:47.397 [2024-07-10 14:26:56.651824] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.397 [2024-07-10 14:26:56.725803] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:47.397 malloc0 00:24:47.397 14:26:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:47.397 14:26:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1433534 00:24:47.397 14:26:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:47.397 14:26:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1433534 /var/tmp/bdevperf.sock 00:24:47.397 14:26:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1433534 ']' 00:24:47.397 14:26:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.397 14:26:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:24:47.397 14:26:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.397 14:26:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:47.397 14:26:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:47.397 [2024-07-10 14:26:56.864528] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:24:47.397 [2024-07-10 14:26:56.864673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433534 ] 00:24:47.655 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.655 [2024-07-10 14:26:56.985747] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.912 [2024-07-10 14:26:57.216834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.477 14:26:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.477 14:26:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:48.477 14:26:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:48.734 [2024-07-10 14:26:58.011075] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:48.734 [2024-07-10 14:26:58.011259] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:48.734 TLSTESTn1 00:24:48.735 14:26:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:48.992 Running I/O for 10 seconds... 
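The ten-second run starting here is the actual FIPS exercise: bdevperf has been started with its own RPC socket, a bdev named TLSTEST is attached to the target's nqn.2016-06.io.spdk:cnode1 listener using the pre-shared key written to key.txt a few lines earlier, and perform_tests then drives queue-depth-128 verify I/O across that TLS-protected NVMe/TCP connection. Reduced to the two RPC calls, with $SPDK standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk path used in this run:

    # Attach an NVMe-oF controller over TLS, using the PSK file prepared by fips.sh
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk $SPDK/test/nvmf/fips/key.txt

    # Run the queued bdevperf job (verify workload, qd 128, 4 KiB I/O, 10 s)
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

If the TLS handshake or any read-back verification failed, the attach or the job would error out and the latency table that follows would never be produced.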
00:24:58.947 00:24:58.947 Latency(us) 00:24:58.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.947 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:58.947 Verification LBA range: start 0x0 length 0x2000 00:24:58.947 TLSTESTn1 : 10.05 2324.76 9.08 0.00 0.00 54903.02 7815.77 76507.21 00:24:58.947 =================================================================================================================== 00:24:58.947 Total : 2324.76 9.08 0.00 0.00 54903.02 7815.77 76507.21 00:24:58.947 0 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:58.947 nvmf_trace.0 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1433534 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1433534 ']' 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1433534 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1433534 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1433534' 00:24:58.947 killing process with pid 1433534 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1433534 00:24:58.947 Received shutdown signal, test time was about 10.000000 seconds 00:24:58.947 00:24:58.947 Latency(us) 00:24:58.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.947 =================================================================================================================== 00:24:58.947 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:58.947 [2024-07-10 14:27:08.405603] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:58.947 14:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1433534 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:00.318 rmmod nvme_tcp 00:25:00.318 rmmod nvme_fabrics 00:25:00.318 rmmod nvme_keyring 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1433264 ']' 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1433264 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1433264 ']' 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1433264 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1433264 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1433264' 00:25:00.318 killing process with pid 1433264 00:25:00.318 14:27:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1433264 00:25:00.318 [2024-07-10 14:27:09.452476] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for 14:27:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1433264 00:25:00.318 removal in v24.09 hit 1 times 00:25:01.691 14:27:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:01.691 14:27:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:01.691 14:27:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:01.691 14:27:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.691 14:27:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:01.691 14:27:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.691 14:27:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.691 14:27:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.594 14:27:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:03.594 14:27:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:03.594 00:25:03.594 real 0m19.838s 00:25:03.594 user 0m26.566s 00:25:03.594 sys 0m5.502s 00:25:03.594 14:27:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:03.594 14:27:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:03.594 ************************************ 00:25:03.594 END TEST nvmf_fips 
00:25:03.594 ************************************ 00:25:03.594 14:27:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:03.594 14:27:12 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:25:03.594 14:27:12 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:03.594 14:27:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:03.594 14:27:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:03.594 14:27:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.594 ************************************ 00:25:03.594 START TEST nvmf_fuzz 00:25:03.594 ************************************ 00:25:03.594 14:27:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:03.852 * Looking for test storage... 00:25:03.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:03.852 14:27:13 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.852 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:03.853 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:03.853 14:27:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:25:03.853 14:27:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:05.755 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.755 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:05.756 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:05.756 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:05.756 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.756 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:06.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:25:06.014 00:25:06.014 --- 10.0.0.2 ping statistics --- 00:25:06.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.014 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:06.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:25:06.014 00:25:06.014 --- 10.0.0.1 ping statistics --- 00:25:06.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.014 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1437669 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1437669 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1437669 ']' 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
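As in the FIPS test, nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace (core mask 0x1 this time) and waitforlisten then blocks until the target's JSON-RPC server answers on /var/tmp/spdk.sock before any rpc_cmd calls are made. The internals of waitforlisten are not shown in this trace; a rough equivalent, assuming only that rpc.py and its spdk_get_version method are available (wait_for_rpc is an illustrative name, not the autotest helper):

    wait_for_rpc() {
        # Illustrative stand-in for waitforlisten: poll until the target's
        # JSON-RPC socket answers a trivial request, then return.
        local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100} i
        for ((i = 0; i < retries; i++)); do
            if "$SPDK/scripts/rpc.py" -s "$sock" spdk_get_version &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        echo "nvmf_tgt did not start listening on $sock" >&2
        return 1
    }

The RPC socket is a plain UNIX-domain socket in the filesystem, which is why the later rpc_cmd calls can reach a target that is otherwise confined to the network namespace without any 'ip netns exec' prefix.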
00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:06.014 14:27:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:06.947 Malloc0 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:06.947 14:27:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:39.016 Fuzzing completed. 
Shutting down the fuzz application 00:25:39.016 00:25:39.016 Dumping successful admin opcodes: 00:25:39.016 8, 9, 10, 24, 00:25:39.016 Dumping successful io opcodes: 00:25:39.016 0, 9, 00:25:39.016 NS: 0x200003aefec0 I/O qp, Total commands completed: 327045, total successful commands: 1935, random_seed: 1863836032 00:25:39.016 NS: 0x200003aefec0 admin qp, Total commands completed: 41200, total successful commands: 337, random_seed: 625622848 00:25:39.016 14:27:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:40.445 Fuzzing completed. Shutting down the fuzz application 00:25:40.445 00:25:40.445 Dumping successful admin opcodes: 00:25:40.445 24, 00:25:40.445 Dumping successful io opcodes: 00:25:40.445 00:25:40.445 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2989175366 00:25:40.445 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2989369626 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:40.445 rmmod nvme_tcp 00:25:40.445 rmmod nvme_fabrics 00:25:40.445 rmmod nvme_keyring 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1437669 ']' 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1437669 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1437669 ']' 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1437669 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1437669 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
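The two result blocks just above come from the two nvme_fuzz passes that fabrics_fuzz.sh runs against nqn.2016-06.io.spdk:cnode1: first a seeded, time-bounded run of generated admin and I/O commands (reading -t 30 as a 30-second runtime and -S 123456 as the RNG seed, which matches the random_seed values echoed in the results), then a replay of the example.json command file from the SPDK tree. The counts report how many of the mostly malformed commands the target happened to accept; the pass criterion is essentially that nothing crashes, which is why the trace moves straight on to deleting the subsystem and tearing everything down. Boiled down from the trace, with $SPDK again standing in for the workspace's spdk directory:

    TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

    # Pass 1: seeded, time-bounded fuzzing of the admin and I/O queues
    $SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a

    # Pass 2: replay the example command file shipped with the fuzzer
    $SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$TRID" \
        -j $SPDK/test/app/fuzz/nvme_fuzz/example.json -a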
00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1437669' 00:25:40.445 killing process with pid 1437669 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1437669 00:25:40.445 14:27:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1437669 00:25:41.819 14:27:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:41.819 14:27:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:41.819 14:27:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:41.819 14:27:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:41.819 14:27:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:41.819 14:27:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.819 14:27:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:41.819 14:27:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.352 14:27:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.352 14:27:53 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:44.352 00:25:44.352 real 0m40.283s 00:25:44.352 user 0m57.705s 00:25:44.352 sys 0m13.742s 00:25:44.352 14:27:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:44.352 14:27:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:44.352 ************************************ 00:25:44.352 END TEST nvmf_fuzz 00:25:44.352 ************************************ 00:25:44.352 14:27:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:44.352 14:27:53 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:44.352 14:27:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:44.352 14:27:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.352 14:27:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:44.352 ************************************ 00:25:44.352 START TEST nvmf_multiconnection 00:25:44.352 ************************************ 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:44.352 * Looking for test storage... 
00:25:44.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:25:44.352 14:27:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.254 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.255 14:27:55 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:46.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:46.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:46.255 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:46.255 14:27:55 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:46.255 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:46.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:25:46.255 00:25:46.255 --- 10.0.0.2 ping statistics --- 00:25:46.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.255 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:46.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:25:46.255 00:25:46.255 --- 10.0.0.1 ping statistics --- 00:25:46.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.255 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1443657 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1443657 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1443657 ']' 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:46.255 14:27:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
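The nvmf_tcp_init plumbing traced above splits the two E810 ports between the root namespace and a private one, so the target (10.0.0.2 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 in the root namespace) exercise real NIC hardware on a single host. Condensed, keeping the interface names from this run:

# Target port moves into its own namespace; the initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator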
00:25:46.256 14:27:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:46.256 14:27:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:46.513 [2024-07-10 14:27:55.735026] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:25:46.513 [2024-07-10 14:27:55.735167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.513 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.513 [2024-07-10 14:27:55.879031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:46.772 [2024-07-10 14:27:56.145632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.772 [2024-07-10 14:27:56.145709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.772 [2024-07-10 14:27:56.145737] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.772 [2024-07-10 14:27:56.145759] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.772 [2024-07-10 14:27:56.145789] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:46.772 [2024-07-10 14:27:56.145918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.772 [2024-07-10 14:27:56.145991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.772 [2024-07-10 14:27:56.146067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.772 [2024-07-10 14:27:56.146077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:47.338 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:47.338 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:25:47.338 14:27:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:47.338 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:47.338 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.338 14:27:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.338 14:27:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:47.338 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.338 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.338 [2024-07-10 14:27:56.753998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.338 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.338 14:27:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:47.338 14:27:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.338 14:27:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:47.338 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.338 
14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.597 Malloc1 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.597 [2024-07-10 14:27:56.864176] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.597 Malloc2 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.597 14:27:56 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.597 14:27:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.597 Malloc3 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.597 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.856 Malloc4 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.856 Malloc5 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.856 Malloc6 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.856 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.115 14:27:57 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.115 Malloc7 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.115 Malloc8 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.115 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.374 Malloc9 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.374 Malloc10 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.374 Malloc11 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
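Target-side provisioning in multiconnection.sh is the same four RPCs repeated for each of the 11 subsystems, which is what the long run of rpc_cmd calls above amounts to. A condensed sketch using SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock (rpc_cmd in the trace is the harness's wrapper for the same calls; the checkout path below is an assumption):

RPC=/path/to/spdk/scripts/rpc.py               # assumption: point at your SPDK checkout
"$RPC" nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
    "$RPC" bdev_malloc_create 64 512 -b Malloc$i                            # 64 MB malloc bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i   # allow any host, serial SPDK$i
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done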
00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.374 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.632 14:27:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.632 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:48.632 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.632 14:27:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:49.200 14:27:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:49.200 14:27:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:49.200 14:27:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:49.200 14:27:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:49.200 14:27:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:51.099 14:28:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:51.099 14:28:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:51.100 14:28:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:51.100 14:28:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:51.100 14:28:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:51.100 14:28:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:51.100 14:28:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.100 14:28:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:52.037 14:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:52.037 14:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:52.037 14:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:52.037 14:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:52.037 14:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:53.934 14:28:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:53.934 14:28:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:53.934 14:28:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:53.934 14:28:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:53.934 14:28:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:53.934 
14:28:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:53.934 14:28:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.934 14:28:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:54.867 14:28:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:54.867 14:28:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:54.867 14:28:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:54.867 14:28:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:54.867 14:28:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:56.767 14:28:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:56.767 14:28:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:56.767 14:28:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:56.767 14:28:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:56.767 14:28:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:56.767 14:28:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:56.767 14:28:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.767 14:28:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:57.333 14:28:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:57.333 14:28:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:57.333 14:28:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:57.333 14:28:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:57.333 14:28:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:59.233 14:28:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:59.233 14:28:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:59.233 14:28:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:59.233 14:28:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:59.233 14:28:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:59.233 14:28:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:59.233 14:28:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.233 14:28:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:00.167 14:28:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:00.167 14:28:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:00.167 14:28:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:00.167 14:28:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:00.167 14:28:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:02.066 14:28:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:02.066 14:28:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:02.066 14:28:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:02.066 14:28:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:02.066 14:28:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:02.066 14:28:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:02.066 14:28:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.066 14:28:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:02.632 14:28:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:02.633 14:28:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:02.633 14:28:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:02.633 14:28:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:02.633 14:28:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:05.160 14:28:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:05.160 14:28:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:05.160 14:28:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:05.160 14:28:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:05.160 14:28:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:05.160 14:28:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:05.160 14:28:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.160 14:28:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:05.418 14:28:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:05.418 14:28:14 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:05.418 14:28:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:05.418 14:28:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:05.418 14:28:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:07.944 14:28:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:07.944 14:28:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:07.944 14:28:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:07.944 14:28:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:07.944 14:28:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.944 14:28:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:07.944 14:28:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.944 14:28:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:08.201 14:28:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:08.201 14:28:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:08.201 14:28:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:08.201 14:28:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:08.201 14:28:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:10.098 14:28:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:10.098 14:28:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:10.098 14:28:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:10.098 14:28:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:10.098 14:28:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:10.098 14:28:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:10.098 14:28:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.098 14:28:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:11.031 14:28:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:11.031 14:28:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:11.031 14:28:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:11.031 14:28:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
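The connect/wait cycle traced above repeats once per subsystem: an "nvme connect" over TCP to nqn.2016-06.io.spdk:cnodeN at 10.0.0.2:4420, followed by a poll until lsblk reports a namespace whose SERIAL column matches SPDKN. Below is a minimal standalone sketch of that pattern, reconstructed from the trace rather than copied from the SPDK scripts; the retry budget of 16 tries at 2-second intervals and the lsblk/grep check are taken from the trace, while the wrapper script framing is an assumption.

#!/usr/bin/env bash
# Sketch of the connect-and-wait pattern seen in the trace above.
# Reconstructed for illustration; the real logic lives in
# test/common/autotest_common.sh and target/multiconnection.sh.

NVMF_SUBSYS=11
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55   # host UUID from the trace

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    # Poll for roughly 30 s (16 tries, 2 s apart) until a block device
    # whose SERIAL matches the subsystem's serial number shows up.
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    echo "no namespace with serial $serial appeared" >&2
    return 1
}

for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" \
        --hostid="$HOSTID" -t tcp -n "nqn.2016-06.io.spdk:cnode$i" \
        -a 10.0.0.2 -s 4420
    waitforserial "SPDK$i"
done

Matching on the SERIAL column rather than a device name keeps the check independent of which /dev/nvmeXn1 node the kernel happens to assign, which is why the fio job list further down simply maps job0..job10 onto whatever nvme0n1..nvme9n1 enumerated.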
00:26:11.031 14:28:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:13.557 14:28:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:13.557 14:28:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:13.557 14:28:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:13.557 14:28:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:13.557 14:28:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:13.557 14:28:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:13.557 14:28:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.557 14:28:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:14.123 14:28:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:14.123 14:28:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:14.123 14:28:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:14.123 14:28:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:14.123 14:28:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:16.078 14:28:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:16.078 14:28:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:16.078 14:28:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:26:16.078 14:28:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:16.078 14:28:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:16.078 14:28:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:16.078 14:28:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.078 14:28:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:17.008 14:28:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:17.008 14:28:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:17.008 14:28:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:17.008 14:28:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:17.008 14:28:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:18.905 14:28:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:18.905 14:28:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:26:18.905 14:28:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:26:18.905 14:28:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:18.905 14:28:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:18.905 14:28:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:18.905 14:28:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:18.905 [global] 00:26:18.905 thread=1 00:26:18.905 invalidate=1 00:26:18.905 rw=read 00:26:18.905 time_based=1 00:26:18.905 runtime=10 00:26:18.905 ioengine=libaio 00:26:18.905 direct=1 00:26:18.905 bs=262144 00:26:18.905 iodepth=64 00:26:18.905 norandommap=1 00:26:18.905 numjobs=1 00:26:18.905 00:26:18.905 [job0] 00:26:18.905 filename=/dev/nvme0n1 00:26:18.905 [job1] 00:26:18.905 filename=/dev/nvme10n1 00:26:18.905 [job2] 00:26:18.905 filename=/dev/nvme1n1 00:26:18.905 [job3] 00:26:18.905 filename=/dev/nvme2n1 00:26:18.905 [job4] 00:26:18.905 filename=/dev/nvme3n1 00:26:18.905 [job5] 00:26:18.905 filename=/dev/nvme4n1 00:26:18.905 [job6] 00:26:18.905 filename=/dev/nvme5n1 00:26:18.905 [job7] 00:26:18.905 filename=/dev/nvme6n1 00:26:18.905 [job8] 00:26:18.905 filename=/dev/nvme7n1 00:26:18.905 [job9] 00:26:18.905 filename=/dev/nvme8n1 00:26:18.905 [job10] 00:26:18.905 filename=/dev/nvme9n1 00:26:19.163 Could not set queue depth (nvme0n1) 00:26:19.163 Could not set queue depth (nvme10n1) 00:26:19.163 Could not set queue depth (nvme1n1) 00:26:19.163 Could not set queue depth (nvme2n1) 00:26:19.163 Could not set queue depth (nvme3n1) 00:26:19.163 Could not set queue depth (nvme4n1) 00:26:19.163 Could not set queue depth (nvme5n1) 00:26:19.163 Could not set queue depth (nvme6n1) 00:26:19.163 Could not set queue depth (nvme7n1) 00:26:19.163 Could not set queue depth (nvme8n1) 00:26:19.163 Could not set queue depth (nvme9n1) 00:26:19.163 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.163 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.163 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.163 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.163 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.163 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.163 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.163 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.163 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.163 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.163 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.163 fio-3.35 00:26:19.163 Starting 11 threads 00:26:31.369 00:26:31.369 job0: 
(groupid=0, jobs=1): err= 0: pid=1448031: Wed Jul 10 14:28:39 2024 00:26:31.369 read: IOPS=612, BW=153MiB/s (160MB/s)(1539MiB/10059msec) 00:26:31.369 slat (usec): min=8, max=117323, avg=1390.58, stdev=5290.51 00:26:31.369 clat (msec): min=2, max=385, avg=103.11, stdev=55.43 00:26:31.369 lat (msec): min=2, max=385, avg=104.50, stdev=56.00 00:26:31.369 clat percentiles (msec): 00:26:31.369 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 42], 20.00th=[ 65], 00:26:31.369 | 30.00th=[ 81], 40.00th=[ 90], 50.00th=[ 99], 60.00th=[ 105], 00:26:31.369 | 70.00th=[ 113], 80.00th=[ 131], 90.00th=[ 161], 95.00th=[ 220], 00:26:31.369 | 99.00th=[ 300], 99.50th=[ 317], 99.90th=[ 334], 99.95th=[ 334], 00:26:31.369 | 99.99th=[ 384] 00:26:31.369 bw ( KiB/s): min=81920, max=243712, per=11.72%, avg=156017.90, stdev=45462.17, samples=20 00:26:31.369 iops : min= 320, max= 952, avg=609.40, stdev=177.63, samples=20 00:26:31.369 lat (msec) : 4=0.03%, 10=2.29%, 20=2.42%, 50=7.73%, 100=41.58% 00:26:31.369 lat (msec) : 250=42.55%, 500=3.39% 00:26:31.369 cpu : usr=0.33%, sys=2.03%, ctx=1186, majf=0, minf=4097 00:26:31.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:31.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:31.369 issued rwts: total=6157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:31.369 job1: (groupid=0, jobs=1): err= 0: pid=1448038: Wed Jul 10 14:28:39 2024 00:26:31.369 read: IOPS=489, BW=122MiB/s (128MB/s)(1232MiB/10059msec) 00:26:31.369 slat (usec): min=8, max=189050, avg=1417.39, stdev=7395.82 00:26:31.369 clat (usec): min=1419, max=446676, avg=129139.37, stdev=82336.57 00:26:31.369 lat (usec): min=1474, max=470010, avg=130556.76, stdev=83374.66 00:26:31.369 clat percentiles (msec): 00:26:31.369 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 22], 20.00th=[ 48], 00:26:31.369 | 30.00th=[ 77], 40.00th=[ 91], 50.00th=[ 113], 60.00th=[ 153], 00:26:31.369 | 70.00th=[ 197], 80.00th=[ 215], 90.00th=[ 232], 95.00th=[ 247], 00:26:31.369 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 376], 99.95th=[ 384], 00:26:31.369 | 99.99th=[ 447] 00:26:31.369 bw ( KiB/s): min=64512, max=193536, per=9.36%, avg=124544.00, stdev=39728.82, samples=20 00:26:31.369 iops : min= 252, max= 756, avg=486.50, stdev=155.19, samples=20 00:26:31.369 lat (msec) : 2=0.08%, 4=0.30%, 10=3.77%, 20=5.07%, 50=11.34% 00:26:31.369 lat (msec) : 100=24.84%, 250=50.37%, 500=4.22% 00:26:31.369 cpu : usr=0.30%, sys=1.70%, ctx=1076, majf=0, minf=4097 00:26:31.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:31.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:31.369 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:31.369 job2: (groupid=0, jobs=1): err= 0: pid=1448058: Wed Jul 10 14:28:39 2024 00:26:31.369 read: IOPS=557, BW=139MiB/s (146MB/s)(1398MiB/10032msec) 00:26:31.369 slat (usec): min=9, max=178287, avg=1136.39, stdev=8095.15 00:26:31.369 clat (usec): min=1181, max=371859, avg=113630.52, stdev=86961.49 00:26:31.369 lat (usec): min=1228, max=392801, avg=114766.91, stdev=88078.17 00:26:31.369 clat percentiles (msec): 00:26:31.369 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 14], 20.00th=[ 24], 00:26:31.369 | 30.00th=[ 41], 
40.00th=[ 57], 50.00th=[ 99], 60.00th=[ 134], 00:26:31.369 | 70.00th=[ 192], 80.00th=[ 211], 90.00th=[ 232], 95.00th=[ 249], 00:26:31.369 | 99.00th=[ 271], 99.50th=[ 288], 99.90th=[ 359], 99.95th=[ 359], 00:26:31.369 | 99.99th=[ 372] 00:26:31.369 bw ( KiB/s): min=63102, max=275456, per=10.63%, avg=141497.50, stdev=59492.17, samples=20 00:26:31.369 iops : min= 246, max= 1076, avg=552.70, stdev=232.43, samples=20 00:26:31.369 lat (msec) : 2=0.73%, 4=1.16%, 10=5.62%, 20=9.30%, 50=18.99% 00:26:31.369 lat (msec) : 100=14.61%, 250=44.64%, 500=4.94% 00:26:31.370 cpu : usr=0.25%, sys=1.55%, ctx=1193, majf=0, minf=4097 00:26:31.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:31.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:31.370 issued rwts: total=5591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:31.370 job3: (groupid=0, jobs=1): err= 0: pid=1448069: Wed Jul 10 14:28:39 2024 00:26:31.370 read: IOPS=425, BW=106MiB/s (112MB/s)(1070MiB/10059msec) 00:26:31.370 slat (usec): min=13, max=124129, avg=2145.92, stdev=6902.71 00:26:31.370 clat (msec): min=10, max=332, avg=148.16, stdev=56.56 00:26:31.370 lat (msec): min=10, max=352, avg=150.31, stdev=57.48 00:26:31.370 clat percentiles (msec): 00:26:31.370 | 1.00th=[ 43], 5.00th=[ 73], 10.00th=[ 86], 20.00th=[ 102], 00:26:31.370 | 30.00th=[ 114], 40.00th=[ 124], 50.00th=[ 133], 60.00th=[ 146], 00:26:31.370 | 70.00th=[ 186], 80.00th=[ 207], 90.00th=[ 228], 95.00th=[ 245], 00:26:31.370 | 99.00th=[ 296], 99.50th=[ 300], 99.90th=[ 326], 99.95th=[ 330], 00:26:31.370 | 99.99th=[ 334] 00:26:31.370 bw ( KiB/s): min=62976, max=172544, per=8.11%, avg=107955.20, stdev=30609.44, samples=20 00:26:31.370 iops : min= 246, max= 674, avg=421.70, stdev=119.57, samples=20 00:26:31.370 lat (msec) : 20=0.23%, 50=1.47%, 100=16.87%, 250=77.31%, 500=4.11% 00:26:31.370 cpu : usr=0.26%, sys=1.51%, ctx=882, majf=0, minf=4097 00:26:31.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:31.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:31.370 issued rwts: total=4280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:31.370 job4: (groupid=0, jobs=1): err= 0: pid=1448076: Wed Jul 10 14:28:39 2024 00:26:31.370 read: IOPS=318, BW=79.7MiB/s (83.6MB/s)(814MiB/10210msec) 00:26:31.370 slat (usec): min=13, max=122780, avg=3066.57, stdev=8400.02 00:26:31.370 clat (msec): min=53, max=476, avg=197.51, stdev=49.91 00:26:31.370 lat (msec): min=54, max=477, avg=200.58, stdev=50.90 00:26:31.370 clat percentiles (msec): 00:26:31.370 | 1.00th=[ 85], 5.00th=[ 110], 10.00th=[ 125], 20.00th=[ 153], 00:26:31.370 | 30.00th=[ 188], 40.00th=[ 199], 50.00th=[ 207], 60.00th=[ 213], 00:26:31.370 | 70.00th=[ 220], 80.00th=[ 228], 90.00th=[ 243], 95.00th=[ 262], 00:26:31.370 | 99.00th=[ 317], 99.50th=[ 397], 99.90th=[ 477], 99.95th=[ 477], 00:26:31.370 | 99.99th=[ 477] 00:26:31.370 bw ( KiB/s): min=61952, max=128000, per=6.14%, avg=81715.20, stdev=17747.40, samples=20 00:26:31.370 iops : min= 242, max= 500, avg=319.20, stdev=69.33, samples=20 00:26:31.370 lat (msec) : 100=2.52%, 250=90.14%, 500=7.34% 00:26:31.370 cpu : usr=0.22%, sys=1.21%, ctx=701, majf=0, minf=4097 00:26:31.370 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:31.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:31.370 issued rwts: total=3255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:31.370 job5: (groupid=0, jobs=1): err= 0: pid=1448122: Wed Jul 10 14:28:39 2024 00:26:31.370 read: IOPS=339, BW=84.8MiB/s (88.9MB/s)(862MiB/10172msec) 00:26:31.370 slat (usec): min=9, max=147723, avg=2410.69, stdev=8391.13 00:26:31.370 clat (msec): min=2, max=444, avg=186.20, stdev=72.08 00:26:31.370 lat (msec): min=2, max=444, avg=188.61, stdev=73.28 00:26:31.370 clat percentiles (msec): 00:26:31.370 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 62], 20.00th=[ 144], 00:26:31.370 | 30.00th=[ 174], 40.00th=[ 197], 50.00th=[ 209], 60.00th=[ 218], 00:26:31.370 | 70.00th=[ 224], 80.00th=[ 232], 90.00th=[ 245], 95.00th=[ 262], 00:26:31.370 | 99.00th=[ 359], 99.50th=[ 368], 99.90th=[ 426], 99.95th=[ 426], 00:26:31.370 | 99.99th=[ 443] 00:26:31.370 bw ( KiB/s): min=62976, max=187392, per=6.51%, avg=86656.00, stdev=29590.75, samples=20 00:26:31.370 iops : min= 246, max= 732, avg=338.50, stdev=115.59, samples=20 00:26:31.370 lat (msec) : 4=0.38%, 10=2.35%, 20=4.58%, 50=2.23%, 100=2.61% 00:26:31.370 lat (msec) : 250=80.26%, 500=7.60% 00:26:31.370 cpu : usr=0.19%, sys=1.19%, ctx=798, majf=0, minf=3723 00:26:31.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:31.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:31.370 issued rwts: total=3449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:31.370 job6: (groupid=0, jobs=1): err= 0: pid=1448145: Wed Jul 10 14:28:39 2024 00:26:31.370 read: IOPS=452, BW=113MiB/s (119MB/s)(1151MiB/10177msec) 00:26:31.370 slat (usec): min=9, max=180458, avg=1125.08, stdev=6417.85 00:26:31.370 clat (usec): min=1124, max=466664, avg=140295.95, stdev=83295.64 00:26:31.370 lat (usec): min=1149, max=466731, avg=141421.03, stdev=83980.26 00:26:31.370 clat percentiles (msec): 00:26:31.370 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 47], 00:26:31.370 | 30.00th=[ 75], 40.00th=[ 110], 50.00th=[ 146], 60.00th=[ 184], 00:26:31.370 | 70.00th=[ 207], 80.00th=[ 222], 90.00th=[ 239], 95.00th=[ 255], 00:26:31.370 | 99.00th=[ 309], 99.50th=[ 321], 99.90th=[ 355], 99.95th=[ 426], 00:26:31.370 | 99.99th=[ 468] 00:26:31.370 bw ( KiB/s): min=67584, max=333824, per=8.73%, avg=116198.40, stdev=61701.39, samples=20 00:26:31.370 iops : min= 264, max= 1304, avg=453.90, stdev=241.02, samples=20 00:26:31.370 lat (msec) : 2=0.07%, 4=0.35%, 10=3.06%, 20=3.52%, 50=15.75% 00:26:31.370 lat (msec) : 100=14.38%, 250=56.98%, 500=5.89% 00:26:31.370 cpu : usr=0.17%, sys=1.19%, ctx=1099, majf=0, minf=4097 00:26:31.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:31.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:31.370 issued rwts: total=4603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:31.370 job7: (groupid=0, jobs=1): err= 0: pid=1448162: Wed Jul 10 14:28:39 2024 00:26:31.370 read: 
IOPS=361, BW=90.5MiB/s (94.9MB/s)(921MiB/10174msec) 00:26:31.370 slat (usec): min=9, max=194750, avg=2230.51, stdev=9728.68 00:26:31.370 clat (msec): min=3, max=434, avg=174.47, stdev=89.97 00:26:31.370 lat (msec): min=3, max=434, avg=176.71, stdev=91.28 00:26:31.370 clat percentiles (msec): 00:26:31.370 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 14], 20.00th=[ 89], 00:26:31.370 | 30.00th=[ 148], 40.00th=[ 192], 50.00th=[ 207], 60.00th=[ 215], 00:26:31.370 | 70.00th=[ 224], 80.00th=[ 232], 90.00th=[ 253], 95.00th=[ 296], 00:26:31.370 | 99.00th=[ 380], 99.50th=[ 393], 99.90th=[ 418], 99.95th=[ 435], 00:26:31.370 | 99.99th=[ 435] 00:26:31.370 bw ( KiB/s): min=49250, max=167424, per=6.96%, avg=92625.70, stdev=31988.05, samples=20 00:26:31.370 iops : min= 192, max= 654, avg=361.80, stdev=124.98, samples=20 00:26:31.370 lat (msec) : 4=0.52%, 10=7.01%, 20=4.89%, 50=3.83%, 100=6.60% 00:26:31.370 lat (msec) : 250=66.08%, 500=11.08% 00:26:31.370 cpu : usr=0.24%, sys=1.35%, ctx=883, majf=0, minf=4097 00:26:31.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:31.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:31.370 issued rwts: total=3682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:31.370 job8: (groupid=0, jobs=1): err= 0: pid=1448175: Wed Jul 10 14:28:39 2024 00:26:31.370 read: IOPS=615, BW=154MiB/s (161MB/s)(1567MiB/10179msec) 00:26:31.370 slat (usec): min=9, max=163761, avg=1313.21, stdev=5293.86 00:26:31.370 clat (msec): min=4, max=324, avg=102.55, stdev=49.35 00:26:31.370 lat (msec): min=4, max=374, avg=103.87, stdev=49.92 00:26:31.370 clat percentiles (msec): 00:26:31.370 | 1.00th=[ 17], 5.00th=[ 31], 10.00th=[ 42], 20.00th=[ 68], 00:26:31.370 | 30.00th=[ 82], 40.00th=[ 90], 50.00th=[ 99], 60.00th=[ 107], 00:26:31.370 | 70.00th=[ 116], 80.00th=[ 130], 90.00th=[ 157], 95.00th=[ 211], 00:26:31.370 | 99.00th=[ 251], 99.50th=[ 264], 99.90th=[ 313], 99.95th=[ 313], 00:26:31.370 | 99.99th=[ 326] 00:26:31.370 bw ( KiB/s): min=77312, max=250880, per=11.93%, avg=158796.80, stdev=49797.94, samples=20 00:26:31.370 iops : min= 302, max= 980, avg=620.30, stdev=194.52, samples=20 00:26:31.370 lat (msec) : 10=0.35%, 20=1.31%, 50=12.67%, 100=37.82%, 250=46.86% 00:26:31.370 lat (msec) : 500=0.99% 00:26:31.370 cpu : usr=0.34%, sys=1.91%, ctx=1159, majf=0, minf=4097 00:26:31.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:31.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:31.370 issued rwts: total=6267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:31.370 job9: (groupid=0, jobs=1): err= 0: pid=1448184: Wed Jul 10 14:28:39 2024 00:26:31.370 read: IOPS=380, BW=95.1MiB/s (99.8MB/s)(968MiB/10169msec) 00:26:31.370 slat (usec): min=9, max=121742, avg=1589.54, stdev=7086.38 00:26:31.370 clat (usec): min=1660, max=449343, avg=166447.34, stdev=78145.57 00:26:31.370 lat (usec): min=1681, max=449373, avg=168036.88, stdev=79050.41 00:26:31.370 clat percentiles (msec): 00:26:31.370 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 46], 20.00th=[ 90], 00:26:31.370 | 30.00th=[ 123], 40.00th=[ 165], 50.00th=[ 190], 60.00th=[ 205], 00:26:31.370 | 70.00th=[ 215], 80.00th=[ 226], 90.00th=[ 239], 95.00th=[ 
266], 00:26:31.370 | 99.00th=[ 351], 99.50th=[ 430], 99.90th=[ 451], 99.95th=[ 451], 00:26:31.370 | 99.99th=[ 451] 00:26:31.370 bw ( KiB/s): min=58880, max=162816, per=7.32%, avg=97459.20, stdev=25326.52, samples=20 00:26:31.370 iops : min= 230, max= 636, avg=380.70, stdev=98.93, samples=20 00:26:31.370 lat (msec) : 2=0.13%, 4=0.26%, 10=1.09%, 20=2.53%, 50=8.04% 00:26:31.370 lat (msec) : 100=12.04%, 250=69.15%, 500=6.77% 00:26:31.370 cpu : usr=0.16%, sys=1.03%, ctx=985, majf=0, minf=4097 00:26:31.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:31.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:31.370 issued rwts: total=3870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:31.370 job10: (groupid=0, jobs=1): err= 0: pid=1448189: Wed Jul 10 14:28:39 2024 00:26:31.370 read: IOPS=686, BW=172MiB/s (180MB/s)(1749MiB/10181msec) 00:26:31.370 slat (usec): min=9, max=177255, avg=1301.66, stdev=6100.33 00:26:31.370 clat (msec): min=2, max=391, avg=91.77, stdev=67.78 00:26:31.370 lat (msec): min=2, max=423, avg=93.07, stdev=68.71 00:26:31.370 clat percentiles (msec): 00:26:31.370 | 1.00th=[ 8], 5.00th=[ 31], 10.00th=[ 41], 20.00th=[ 48], 00:26:31.370 | 30.00th=[ 51], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 75], 00:26:31.370 | 70.00th=[ 101], 80.00th=[ 142], 90.00th=[ 199], 95.00th=[ 228], 00:26:31.370 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 380], 99.95th=[ 388], 00:26:31.370 | 99.99th=[ 393] 00:26:31.370 bw ( KiB/s): min=64512, max=333824, per=13.33%, avg=177433.60, stdev=91235.93, samples=20 00:26:31.370 iops : min= 252, max= 1304, avg=693.10, stdev=356.39, samples=20 00:26:31.370 lat (msec) : 4=0.04%, 10=1.16%, 20=1.74%, 50=24.76%, 100=42.15% 00:26:31.370 lat (msec) : 250=27.22%, 500=2.92% 00:26:31.370 cpu : usr=0.43%, sys=2.24%, ctx=1291, majf=0, minf=4097 00:26:31.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:31.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:31.370 issued rwts: total=6994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:31.370 00:26:31.370 Run status group 0 (all jobs): 00:26:31.370 READ: bw=1300MiB/s (1363MB/s), 79.7MiB/s-172MiB/s (83.6MB/s-180MB/s), io=13.0GiB (13.9GB), run=10032-10210msec 00:26:31.370 00:26:31.370 Disk stats (read/write): 00:26:31.370 nvme0n1: ios=12060/0, merge=0/0, ticks=1237615/0, in_queue=1237615, util=96.95% 00:26:31.370 nvme10n1: ios=9608/0, merge=0/0, ticks=1235947/0, in_queue=1235947, util=97.18% 00:26:31.370 nvme1n1: ios=10990/0, merge=0/0, ticks=1239643/0, in_queue=1239643, util=97.46% 00:26:31.370 nvme2n1: ios=8311/0, merge=0/0, ticks=1227353/0, in_queue=1227353, util=97.64% 00:26:31.370 nvme3n1: ios=6467/0, merge=0/0, ticks=1249635/0, in_queue=1249635, util=97.79% 00:26:31.370 nvme4n1: ios=6861/0, merge=0/0, ticks=1258314/0, in_queue=1258314, util=98.16% 00:26:31.370 nvme5n1: ios=9148/0, merge=0/0, ticks=1266980/0, in_queue=1266980, util=98.35% 00:26:31.370 nvme6n1: ios=7290/0, merge=0/0, ticks=1241477/0, in_queue=1241477, util=98.46% 00:26:31.370 nvme7n1: ios=12478/0, merge=0/0, ticks=1260854/0, in_queue=1260854, util=98.91% 00:26:31.370 nvme8n1: ios=7729/0, merge=0/0, ticks=1262300/0, in_queue=1262300, 
util=99.09% 00:26:31.370 nvme9n1: ios=13897/0, merge=0/0, ticks=1243115/0, in_queue=1243115, util=99.23% 00:26:31.370 14:28:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:31.370 [global] 00:26:31.370 thread=1 00:26:31.370 invalidate=1 00:26:31.370 rw=randwrite 00:26:31.370 time_based=1 00:26:31.370 runtime=10 00:26:31.370 ioengine=libaio 00:26:31.370 direct=1 00:26:31.370 bs=262144 00:26:31.370 iodepth=64 00:26:31.370 norandommap=1 00:26:31.370 numjobs=1 00:26:31.370 00:26:31.370 [job0] 00:26:31.370 filename=/dev/nvme0n1 00:26:31.370 [job1] 00:26:31.370 filename=/dev/nvme10n1 00:26:31.370 [job2] 00:26:31.370 filename=/dev/nvme1n1 00:26:31.370 [job3] 00:26:31.370 filename=/dev/nvme2n1 00:26:31.370 [job4] 00:26:31.370 filename=/dev/nvme3n1 00:26:31.370 [job5] 00:26:31.370 filename=/dev/nvme4n1 00:26:31.370 [job6] 00:26:31.370 filename=/dev/nvme5n1 00:26:31.370 [job7] 00:26:31.370 filename=/dev/nvme6n1 00:26:31.370 [job8] 00:26:31.370 filename=/dev/nvme7n1 00:26:31.370 [job9] 00:26:31.370 filename=/dev/nvme8n1 00:26:31.370 [job10] 00:26:31.370 filename=/dev/nvme9n1 00:26:31.370 Could not set queue depth (nvme0n1) 00:26:31.370 Could not set queue depth (nvme10n1) 00:26:31.370 Could not set queue depth (nvme1n1) 00:26:31.370 Could not set queue depth (nvme2n1) 00:26:31.370 Could not set queue depth (nvme3n1) 00:26:31.370 Could not set queue depth (nvme4n1) 00:26:31.370 Could not set queue depth (nvme5n1) 00:26:31.370 Could not set queue depth (nvme6n1) 00:26:31.370 Could not set queue depth (nvme7n1) 00:26:31.370 Could not set queue depth (nvme8n1) 00:26:31.370 Could not set queue depth (nvme9n1) 00:26:31.370 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:31.370 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:31.370 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:31.370 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:31.370 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:31.370 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:31.370 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:31.370 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:31.370 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:31.370 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:31.370 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:31.370 fio-3.35 00:26:31.370 Starting 11 threads 00:26:41.340 00:26:41.340 job0: (groupid=0, jobs=1): err= 0: pid=1449208: Wed Jul 10 14:28:50 2024 00:26:41.340 write: IOPS=301, BW=75.3MiB/s (78.9MB/s)(769MiB/10210msec); 0 zone resets 00:26:41.340 slat (usec): min=19, max=170587, avg=2515.89, stdev=8733.53 00:26:41.340 clat (usec): min=1558, max=1055.9k, avg=209903.79, 
stdev=170179.94 00:26:41.340 lat (usec): min=1617, max=1055.9k, avg=212419.67, stdev=172710.06 00:26:41.340 clat percentiles (msec): 00:26:41.340 | 1.00th=[ 6], 5.00th=[ 31], 10.00th=[ 47], 20.00th=[ 85], 00:26:41.340 | 30.00th=[ 109], 40.00th=[ 134], 50.00th=[ 176], 60.00th=[ 190], 00:26:41.340 | 70.00th=[ 213], 80.00th=[ 305], 90.00th=[ 489], 95.00th=[ 550], 00:26:41.340 | 99.00th=[ 810], 99.50th=[ 869], 99.90th=[ 986], 99.95th=[ 986], 00:26:41.340 | 99.99th=[ 1053] 00:26:41.340 bw ( KiB/s): min=16384, max=174592, per=7.15%, avg=77085.45, stdev=45484.89, samples=20 00:26:41.340 iops : min= 64, max= 682, avg=301.10, stdev=177.69, samples=20 00:26:41.340 lat (msec) : 2=0.13%, 4=0.42%, 10=1.56%, 20=1.46%, 50=7.45% 00:26:41.340 lat (msec) : 100=15.19%, 250=48.83%, 500=15.81%, 750=7.25%, 1000=1.85% 00:26:41.340 lat (msec) : 2000=0.03% 00:26:41.340 cpu : usr=0.91%, sys=0.83%, ctx=1821, majf=0, minf=1 00:26:41.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:41.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.340 issued rwts: total=0,3074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.340 job1: (groupid=0, jobs=1): err= 0: pid=1449220: Wed Jul 10 14:28:50 2024 00:26:41.340 write: IOPS=337, BW=84.4MiB/s (88.5MB/s)(866MiB/10264msec); 0 zone resets 00:26:41.340 slat (usec): min=16, max=231882, avg=1535.84, stdev=6324.29 00:26:41.340 clat (usec): min=1367, max=711862, avg=187887.91, stdev=144996.64 00:26:41.340 lat (usec): min=1405, max=711912, avg=189423.75, stdev=145772.48 00:26:41.340 clat percentiles (msec): 00:26:41.340 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 18], 20.00th=[ 61], 00:26:41.340 | 30.00th=[ 101], 40.00th=[ 123], 50.00th=[ 157], 60.00th=[ 188], 00:26:41.340 | 70.00th=[ 241], 80.00th=[ 309], 90.00th=[ 409], 95.00th=[ 464], 00:26:41.340 | 99.00th=[ 625], 99.50th=[ 667], 99.90th=[ 709], 99.95th=[ 709], 00:26:41.340 | 99.99th=[ 709] 00:26:41.340 bw ( KiB/s): min=31232, max=198144, per=8.07%, avg=87065.60, stdev=46374.43, samples=20 00:26:41.340 iops : min= 122, max= 774, avg=340.10, stdev=181.15, samples=20 00:26:41.340 lat (msec) : 2=0.26%, 4=0.66%, 10=4.94%, 20=5.28%, 50=6.84% 00:26:41.340 lat (msec) : 100=10.85%, 250=42.86%, 500=24.50%, 750=3.81% 00:26:41.340 cpu : usr=1.11%, sys=1.15%, ctx=2289, majf=0, minf=1 00:26:41.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:41.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.340 issued rwts: total=0,3465,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.340 job2: (groupid=0, jobs=1): err= 0: pid=1449221: Wed Jul 10 14:28:50 2024 00:26:41.340 write: IOPS=439, BW=110MiB/s (115MB/s)(1126MiB/10240msec); 0 zone resets 00:26:41.340 slat (usec): min=15, max=209920, avg=1500.68, stdev=5346.56 00:26:41.340 clat (usec): min=1362, max=631488, avg=143903.23, stdev=105384.50 00:26:41.340 lat (usec): min=1406, max=631527, avg=145403.91, stdev=106437.83 00:26:41.340 clat percentiles (msec): 00:26:41.340 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 27], 20.00th=[ 75], 00:26:41.340 | 30.00th=[ 93], 40.00th=[ 103], 50.00th=[ 130], 60.00th=[ 144], 00:26:41.340 | 70.00th=[ 171], 80.00th=[ 194], 90.00th=[ 271], 95.00th=[ 376], 
00:26:41.340 | 99.00th=[ 506], 99.50th=[ 600], 99.90th=[ 625], 99.95th=[ 625], 00:26:41.340 | 99.99th=[ 634] 00:26:41.340 bw ( KiB/s): min=32768, max=213504, per=10.53%, avg=113638.40, stdev=44182.92, samples=20 00:26:41.340 iops : min= 128, max= 834, avg=443.90, stdev=172.59, samples=20 00:26:41.340 lat (msec) : 2=0.16%, 4=0.87%, 10=3.22%, 20=3.18%, 50=9.71% 00:26:41.340 lat (msec) : 100=21.08%, 250=51.11%, 500=9.42%, 750=1.27% 00:26:41.340 cpu : usr=1.04%, sys=1.49%, ctx=2785, majf=0, minf=1 00:26:41.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:41.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.341 issued rwts: total=0,4502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.341 job3: (groupid=0, jobs=1): err= 0: pid=1449222: Wed Jul 10 14:28:50 2024 00:26:41.341 write: IOPS=537, BW=134MiB/s (141MB/s)(1362MiB/10138msec); 0 zone resets 00:26:41.341 slat (usec): min=17, max=157328, avg=1354.61, stdev=4803.62 00:26:41.341 clat (usec): min=1962, max=518381, avg=117653.75, stdev=95755.01 00:26:41.341 lat (msec): min=2, max=519, avg=119.01, stdev=96.81 00:26:41.341 clat percentiles (msec): 00:26:41.341 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 42], 20.00th=[ 53], 00:26:41.341 | 30.00th=[ 57], 40.00th=[ 82], 50.00th=[ 90], 60.00th=[ 104], 00:26:41.341 | 70.00th=[ 126], 80.00th=[ 159], 90.00th=[ 251], 95.00th=[ 359], 00:26:41.341 | 99.00th=[ 460], 99.50th=[ 477], 99.90th=[ 498], 99.95th=[ 510], 00:26:41.341 | 99.99th=[ 518] 00:26:41.341 bw ( KiB/s): min=40448, max=315904, per=12.78%, avg=137881.60, stdev=81172.52, samples=20 00:26:41.341 iops : min= 158, max= 1234, avg=538.60, stdev=317.08, samples=20 00:26:41.341 lat (msec) : 2=0.02%, 4=0.06%, 10=1.63%, 20=3.01%, 50=10.63% 00:26:41.341 lat (msec) : 100=43.49%, 250=31.12%, 500=9.95%, 750=0.09% 00:26:41.341 cpu : usr=1.51%, sys=1.58%, ctx=2835, majf=0, minf=1 00:26:41.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:41.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.341 issued rwts: total=0,5449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.341 job4: (groupid=0, jobs=1): err= 0: pid=1449223: Wed Jul 10 14:28:50 2024 00:26:41.341 write: IOPS=255, BW=63.8MiB/s (66.9MB/s)(655MiB/10258msec); 0 zone resets 00:26:41.341 slat (usec): min=18, max=133253, avg=2884.19, stdev=9247.12 00:26:41.341 clat (usec): min=1636, max=844129, avg=247691.83, stdev=174023.85 00:26:41.341 lat (usec): min=1679, max=844203, avg=250576.02, stdev=176493.47 00:26:41.341 clat percentiles (msec): 00:26:41.341 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 30], 20.00th=[ 99], 00:26:41.341 | 30.00th=[ 127], 40.00th=[ 163], 50.00th=[ 211], 60.00th=[ 279], 00:26:41.341 | 70.00th=[ 355], 80.00th=[ 405], 90.00th=[ 472], 95.00th=[ 531], 00:26:41.341 | 99.00th=[ 793], 99.50th=[ 827], 99.90th=[ 844], 99.95th=[ 844], 00:26:41.341 | 99.99th=[ 844] 00:26:41.341 bw ( KiB/s): min=14336, max=171008, per=6.06%, avg=65412.70, stdev=41892.32, samples=20 00:26:41.341 iops : min= 56, max= 668, avg=255.50, stdev=163.65, samples=20 00:26:41.341 lat (msec) : 2=0.11%, 4=0.65%, 10=4.43%, 20=2.41%, 50=4.97% 00:26:41.341 lat (msec) : 100=8.10%, 250=35.60%, 500=36.10%, 
750=6.23%, 1000=1.41% 00:26:41.341 cpu : usr=0.77%, sys=0.90%, ctx=1552, majf=0, minf=1 00:26:41.341 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:41.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.341 issued rwts: total=0,2618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.341 job5: (groupid=0, jobs=1): err= 0: pid=1449224: Wed Jul 10 14:28:50 2024 00:26:41.341 write: IOPS=326, BW=81.7MiB/s (85.6MB/s)(828MiB/10134msec); 0 zone resets 00:26:41.341 slat (usec): min=24, max=151766, avg=2722.87, stdev=6439.24 00:26:41.341 clat (msec): min=2, max=556, avg=193.12, stdev=92.18 00:26:41.341 lat (msec): min=2, max=556, avg=195.84, stdev=93.23 00:26:41.341 clat percentiles (msec): 00:26:41.341 | 1.00th=[ 9], 5.00th=[ 58], 10.00th=[ 92], 20.00th=[ 131], 00:26:41.341 | 30.00th=[ 163], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 192], 00:26:41.341 | 70.00th=[ 207], 80.00th=[ 232], 90.00th=[ 317], 95.00th=[ 401], 00:26:41.341 | 99.00th=[ 447], 99.50th=[ 502], 99.90th=[ 542], 99.95th=[ 558], 00:26:41.341 | 99.99th=[ 558] 00:26:41.341 bw ( KiB/s): min=38912, max=172032, per=7.71%, avg=83130.60, stdev=29535.41, samples=20 00:26:41.341 iops : min= 152, max= 672, avg=324.70, stdev=115.38, samples=20 00:26:41.341 lat (msec) : 4=0.15%, 10=1.90%, 20=1.30%, 50=1.24%, 100=10.85% 00:26:41.341 lat (msec) : 250=66.37%, 500=17.67%, 750=0.51% 00:26:41.341 cpu : usr=0.85%, sys=1.12%, ctx=1227, majf=0, minf=1 00:26:41.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:41.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.341 issued rwts: total=0,3310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.341 job6: (groupid=0, jobs=1): err= 0: pid=1449225: Wed Jul 10 14:28:50 2024 00:26:41.341 write: IOPS=416, BW=104MiB/s (109MB/s)(1051MiB/10091msec); 0 zone resets 00:26:41.341 slat (usec): min=15, max=238520, avg=1551.14, stdev=7232.72 00:26:41.341 clat (usec): min=1435, max=802844, avg=152067.91, stdev=152459.75 00:26:41.341 lat (usec): min=1476, max=813219, avg=153619.06, stdev=154448.37 00:26:41.341 clat percentiles (msec): 00:26:41.341 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 17], 20.00th=[ 35], 00:26:41.341 | 30.00th=[ 56], 40.00th=[ 70], 50.00th=[ 100], 60.00th=[ 130], 00:26:41.341 | 70.00th=[ 171], 80.00th=[ 268], 90.00th=[ 393], 95.00th=[ 472], 00:26:41.341 | 99.00th=[ 701], 99.50th=[ 768], 99.90th=[ 793], 99.95th=[ 793], 00:26:41.341 | 99.99th=[ 802] 00:26:41.341 bw ( KiB/s): min=25088, max=340649, per=9.83%, avg=105992.45, stdev=78514.17, samples=20 00:26:41.341 iops : min= 98, max= 1330, avg=414.00, stdev=306.59, samples=20 00:26:41.341 lat (msec) : 2=0.14%, 4=1.05%, 10=4.31%, 20=6.85%, 50=13.80% 00:26:41.341 lat (msec) : 100=24.08%, 250=28.77%, 500=17.33%, 750=3.05%, 1000=0.62% 00:26:41.341 cpu : usr=1.23%, sys=1.31%, ctx=2865, majf=0, minf=1 00:26:41.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:41.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.341 issued rwts: total=0,4202,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.341 
latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.341 job7: (groupid=0, jobs=1): err= 0: pid=1449226: Wed Jul 10 14:28:50 2024 00:26:41.341 write: IOPS=428, BW=107MiB/s (112MB/s)(1079MiB/10064msec); 0 zone resets 00:26:41.341 slat (usec): min=23, max=57455, avg=1884.79, stdev=4364.50 00:26:41.341 clat (msec): min=2, max=425, avg=147.28, stdev=63.81 00:26:41.341 lat (msec): min=2, max=425, avg=149.16, stdev=64.53 00:26:41.341 clat percentiles (msec): 00:26:41.341 | 1.00th=[ 17], 5.00th=[ 54], 10.00th=[ 75], 20.00th=[ 94], 00:26:41.341 | 30.00th=[ 104], 40.00th=[ 116], 50.00th=[ 146], 60.00th=[ 174], 00:26:41.341 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 215], 95.00th=[ 251], 00:26:41.341 | 99.00th=[ 359], 99.50th=[ 393], 99.90th=[ 426], 99.95th=[ 426], 00:26:41.341 | 99.99th=[ 426] 00:26:41.341 bw ( KiB/s): min=47104, max=172544, per=10.09%, avg=108883.95, stdev=32952.75, samples=20 00:26:41.341 iops : min= 184, max= 674, avg=425.30, stdev=128.75, samples=20 00:26:41.341 lat (msec) : 4=0.07%, 10=0.44%, 20=0.72%, 50=2.99%, 100=23.08% 00:26:41.341 lat (msec) : 250=67.68%, 500=5.03% 00:26:41.341 cpu : usr=1.29%, sys=1.30%, ctx=1910, majf=0, minf=1 00:26:41.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:41.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.341 issued rwts: total=0,4316,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.341 job8: (groupid=0, jobs=1): err= 0: pid=1449227: Wed Jul 10 14:28:50 2024 00:26:41.341 write: IOPS=460, BW=115MiB/s (121MB/s)(1163MiB/10092msec); 0 zone resets 00:26:41.341 slat (usec): min=20, max=155294, avg=1435.38, stdev=5777.14 00:26:41.341 clat (msec): min=2, max=632, avg=137.40, stdev=125.88 00:26:41.341 lat (msec): min=3, max=638, avg=138.83, stdev=127.43 00:26:41.341 clat percentiles (msec): 00:26:41.341 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 26], 20.00th=[ 52], 00:26:41.341 | 30.00th=[ 74], 40.00th=[ 87], 50.00th=[ 92], 60.00th=[ 103], 00:26:41.341 | 70.00th=[ 132], 80.00th=[ 207], 90.00th=[ 330], 95.00th=[ 422], 00:26:41.341 | 99.00th=[ 567], 99.50th=[ 609], 99.90th=[ 625], 99.95th=[ 634], 00:26:41.341 | 99.99th=[ 634] 00:26:41.341 bw ( KiB/s): min=34816, max=216576, per=10.89%, avg=117438.60, stdev=59922.66, samples=20 00:26:41.341 iops : min= 136, max= 846, avg=458.70, stdev=234.08, samples=20 00:26:41.341 lat (msec) : 4=0.26%, 10=1.85%, 20=5.08%, 50=12.28%, 100=38.97% 00:26:41.341 lat (msec) : 250=23.91%, 500=15.59%, 750=2.06% 00:26:41.341 cpu : usr=1.39%, sys=1.68%, ctx=2834, majf=0, minf=1 00:26:41.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:41.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.341 issued rwts: total=0,4650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.341 job9: (groupid=0, jobs=1): err= 0: pid=1449228: Wed Jul 10 14:28:50 2024 00:26:41.341 write: IOPS=300, BW=75.0MiB/s (78.7MB/s)(769MiB/10255msec); 0 zone resets 00:26:41.341 slat (usec): min=24, max=248905, avg=2471.51, stdev=9110.85 00:26:41.341 clat (msec): min=2, max=858, avg=210.68, stdev=168.61 00:26:41.341 lat (msec): min=2, max=858, avg=213.15, stdev=170.78 00:26:41.341 clat percentiles (msec): 00:26:41.341 | 
1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 33], 20.00th=[ 52], 00:26:41.341 | 30.00th=[ 79], 40.00th=[ 136], 50.00th=[ 184], 60.00th=[ 222], 00:26:41.341 | 70.00th=[ 284], 80.00th=[ 338], 90.00th=[ 426], 95.00th=[ 514], 00:26:41.341 | 99.00th=[ 810], 99.50th=[ 835], 99.90th=[ 852], 99.95th=[ 860], 00:26:41.341 | 99.99th=[ 860] 00:26:41.341 bw ( KiB/s): min=12288, max=268800, per=7.15%, avg=77141.35, stdev=52943.66, samples=20 00:26:41.341 iops : min= 48, max= 1050, avg=301.30, stdev=206.81, samples=20 00:26:41.341 lat (msec) : 4=0.13%, 10=1.82%, 20=1.43%, 50=16.28%, 100=14.01% 00:26:41.341 lat (msec) : 250=30.81%, 500=29.41%, 750=4.39%, 1000=1.72% 00:26:41.341 cpu : usr=0.91%, sys=1.34%, ctx=1848, majf=0, minf=1 00:26:41.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:41.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.341 issued rwts: total=0,3077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.341 job10: (groupid=0, jobs=1): err= 0: pid=1449229: Wed Jul 10 14:28:50 2024 00:26:41.341 write: IOPS=452, BW=113MiB/s (119MB/s)(1147MiB/10137msec); 0 zone resets 00:26:41.341 slat (usec): min=19, max=144869, avg=1935.10, stdev=5238.40 00:26:41.342 clat (usec): min=1503, max=527961, avg=139408.33, stdev=102211.67 00:26:41.342 lat (usec): min=1547, max=528014, avg=141343.43, stdev=103490.09 00:26:41.342 clat percentiles (msec): 00:26:41.342 | 1.00th=[ 6], 5.00th=[ 22], 10.00th=[ 46], 20.00th=[ 53], 00:26:41.342 | 30.00th=[ 56], 40.00th=[ 92], 50.00th=[ 103], 60.00th=[ 165], 00:26:41.342 | 70.00th=[ 186], 80.00th=[ 209], 90.00th=[ 268], 95.00th=[ 342], 00:26:41.342 | 99.00th=[ 477], 99.50th=[ 489], 99.90th=[ 510], 99.95th=[ 518], 00:26:41.342 | 99.99th=[ 527] 00:26:41.342 bw ( KiB/s): min=38912, max=302592, per=10.74%, avg=115840.00, stdev=77605.25, samples=20 00:26:41.342 iops : min= 152, max= 1182, avg=452.50, stdev=303.15, samples=20 00:26:41.342 lat (msec) : 2=0.09%, 4=0.52%, 10=1.72%, 20=2.20%, 50=7.67% 00:26:41.342 lat (msec) : 100=35.18%, 250=40.10%, 500=12.36%, 750=0.15% 00:26:41.342 cpu : usr=1.39%, sys=1.44%, ctx=1848, majf=0, minf=1 00:26:41.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:41.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.342 issued rwts: total=0,4588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.342 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.342 00:26:41.342 Run status group 0 (all jobs): 00:26:41.342 WRITE: bw=1053MiB/s (1105MB/s), 63.8MiB/s-134MiB/s (66.9MB/s-141MB/s), io=10.6GiB (11.3GB), run=10064-10264msec 00:26:41.342 00:26:41.342 Disk stats (read/write): 00:26:41.342 nvme0n1: ios=49/6122, merge=0/0, ticks=72/1238438, in_queue=1238510, util=97.36% 00:26:41.342 nvme10n1: ios=44/6857, merge=0/0, ticks=881/1243366, in_queue=1244247, util=100.00% 00:26:41.342 nvme1n1: ios=44/8960, merge=0/0, ticks=1105/1240232, in_queue=1241337, util=99.92% 00:26:41.342 nvme2n1: ios=41/10712, merge=0/0, ticks=51/1215489, in_queue=1215540, util=97.83% 00:26:41.342 nvme3n1: ios=23/5170, merge=0/0, ticks=40/1236147, in_queue=1236187, util=97.86% 00:26:41.342 nvme4n1: ios=0/6440, merge=0/0, ticks=0/1207230, in_queue=1207230, util=98.09% 00:26:41.342 nvme5n1: ios=0/8173, merge=0/0, ticks=0/1212250, 
in_queue=1212250, util=98.26% 00:26:41.342 nvme6n1: ios=0/8415, merge=0/0, ticks=0/1215387, in_queue=1215387, util=98.37% 00:26:41.342 nvme7n1: ios=0/8997, merge=0/0, ticks=0/1223725, in_queue=1223725, util=98.76% 00:26:41.342 nvme8n1: ios=0/6093, merge=0/0, ticks=0/1230782, in_queue=1230782, util=98.97% 00:26:41.342 nvme9n1: ios=0/8993, merge=0/0, ticks=0/1207571, in_queue=1207571, util=99.08% 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:41.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:41.342 14:28:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:41.599 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:41.599 14:28:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:41.599 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:41.599 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:41.599 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:41.599 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:41.600 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:41.600 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:41.600 14:28:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:41.600 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.600 14:28:50 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:41.600 14:28:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.600 14:28:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:41.600 14:28:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:41.857 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:41.857 14:28:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:41.857 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:41.857 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:41.857 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:41.857 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:41.857 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:41.857 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:41.857 14:28:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:41.857 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.857 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:41.857 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.857 14:28:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:41.857 14:28:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:42.421 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:42.421 14:28:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:42.421 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.421 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.421 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:42.421 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.421 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:42.421 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.421 14:28:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:42.421 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.421 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.421 14:28:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.421 14:28:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.421 14:28:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:42.985 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 
controller(s) 00:26:42.985 14:28:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:42.985 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.985 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.985 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:42.985 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.985 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:42.985 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.985 14:28:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:42.985 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.985 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.985 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.986 14:28:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.986 14:28:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:43.243 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:43.243 14:28:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:43.243 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:43.243 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:43.243 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:43.243 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:43.243 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:43.243 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:43.243 14:28:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:43.243 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.243 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.243 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.243 14:28:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.243 14:28:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:43.500 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:43.500 14:28:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:43.500 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:43.500 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:43.500 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:43.500 
14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:43.500 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:43.500 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:43.500 14:28:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:43.500 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.500 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.500 14:28:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.500 14:28:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.500 14:28:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:43.757 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:43.757 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:43.757 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:43.757 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:43.757 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:43.757 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:43.757 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:43.757 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:43.757 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:43.757 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.757 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.757 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.757 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.757 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:44.014 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:44.014 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:44.014 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:44.014 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:44.014 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:44.014 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:44.014 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:44.014 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:44.014 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:44.014 
14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.014 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.014 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.014 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.014 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:44.272 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:44.272 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:44.272 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:44.272 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:44.272 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:44.272 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:44.272 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:44.272 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:44.272 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:44.273 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:44.273 14:28:53 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:44.273 14:28:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:44.531 rmmod nvme_tcp 00:26:44.531 rmmod nvme_fabrics 00:26:44.531 rmmod nvme_keyring 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1443657 ']' 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1443657 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1443657 ']' 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1443657 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1443657 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1443657' 00:26:44.531 killing process with pid 1443657 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1443657 00:26:44.531 14:28:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1443657 00:26:47.811 14:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:47.811 14:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:47.811 14:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:47.811 14:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:47.811 14:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:47.811 14:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.811 14:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.811 14:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.712 14:28:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:49.712 00:26:49.712 real 1m5.597s 00:26:49.712 user 3m37.045s 00:26:49.712 sys 0m22.935s 00:26:49.712 
14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:49.712 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:49.712 ************************************ 00:26:49.712 END TEST nvmf_multiconnection 00:26:49.712 ************************************ 00:26:49.712 14:28:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:49.712 14:28:58 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:49.712 14:28:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:49.712 14:28:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:49.712 14:28:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.712 ************************************ 00:26:49.712 START TEST nvmf_initiator_timeout 00:26:49.712 ************************************ 00:26:49.712 14:28:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:49.712 * Looking for test storage... 00:26:49.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:49.712 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.712 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:49.712 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.712 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.712 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.713 14:28:59 
nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:26:49.713 14:28:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.607 
14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:51.607 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:51.607 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.607 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:51.608 14:29:00 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:51.608 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:51.608 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.608 14:29:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:51.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:26:51.608 00:26:51.608 --- 10.0.0.2 ping statistics --- 00:26:51.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.608 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:26:51.608 00:26:51.608 --- 10.0.0.1 ping statistics --- 00:26:51.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.608 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:51.608 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:51.866 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:51.866 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:51.866 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:51.866 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.866 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1452824 00:26:51.866 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:51.866 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1452824 00:26:51.866 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1452824 ']' 00:26:51.866 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.866 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.866 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.866 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.866 14:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.866 [2024-07-10 14:29:01.185915] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
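A condensed reconstruction of the nvmf_tcp_init steps traced just above — a sketch only, built from the ip/iptables commands visible in this log, not the literal helper code in nvmf/common.sh; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are the ones this run happened to use:

  # Sketch (assumed consolidation of the traced commands): move the target-side
  # E810 port into its own network namespace so target and initiator can talk
  # over real NVMe/TCP on one host.
  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NVMF_TARGET_NAMESPACE"
  ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"                        # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                       # initiator address
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT              # allow NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                        # initiator -> target reachability
  ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1                 # target -> initiator reachability

After this plumbing, the target application is started inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is why the listener at 10.0.0.2:4420 is reachable from the host-side initiator in the connect step that follows.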
00:26:51.866 [2024-07-10 14:29:01.186051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.866 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.866 [2024-07-10 14:29:01.328781] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:52.123 [2024-07-10 14:29:01.559172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:52.123 [2024-07-10 14:29:01.559232] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:52.123 [2024-07-10 14:29:01.559255] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:52.123 [2024-07-10 14:29:01.559272] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:52.123 [2024-07-10 14:29:01.559291] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:52.123 [2024-07-10 14:29:01.559431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.123 [2024-07-10 14:29:01.559493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:52.123 [2024-07-10 14:29:01.559539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.123 [2024-07-10 14:29:01.559548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:52.687 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:52.687 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:52.687 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:52.687 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:52.687 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.687 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.687 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:52.687 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:52.687 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.687 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.944 Malloc0 00:26:52.944 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.945 Delay0 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.945 14:29:02 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.945 [2024-07-10 14:29:02.215567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.945 [2024-07-10 14:29:02.245039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.945 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:53.510 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:53.510 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:53.510 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:53.510 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:53.510 14:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:55.406 14:29:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:55.406 14:29:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:55.406 14:29:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:55.406 14:29:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:55.406 14:29:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:55.406 14:29:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:55.406 14:29:04 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1453258 00:26:55.406 14:29:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:55.406 14:29:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:55.406 [global] 00:26:55.406 thread=1 00:26:55.406 invalidate=1 00:26:55.406 rw=write 00:26:55.406 time_based=1 00:26:55.406 runtime=60 00:26:55.406 ioengine=libaio 00:26:55.406 direct=1 00:26:55.406 bs=4096 00:26:55.406 iodepth=1 00:26:55.406 norandommap=0 00:26:55.406 numjobs=1 00:26:55.406 00:26:55.664 verify_dump=1 00:26:55.665 verify_backlog=512 00:26:55.665 verify_state_save=0 00:26:55.665 do_verify=1 00:26:55.665 verify=crc32c-intel 00:26:55.665 [job0] 00:26:55.665 filename=/dev/nvme0n1 00:26:55.665 Could not set queue depth (nvme0n1) 00:26:55.665 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:55.665 fio-3.35 00:26:55.665 Starting 1 thread 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.943 true 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.943 true 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.943 true 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.943 true 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.943 14:29:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:01.468 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:01.468 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.468 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.468 true 00:27:01.468 14:29:10 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.468 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:01.468 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.468 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.468 true 00:27:01.468 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.468 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:01.469 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.469 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.469 true 00:27:01.469 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.469 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:01.469 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.469 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.469 true 00:27:01.469 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.469 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:01.469 14:29:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1453258 00:27:57.786 00:27:57.786 job0: (groupid=0, jobs=1): err= 0: pid=1453445: Wed Jul 10 14:30:05 2024 00:27:57.786 read: IOPS=7, BW=31.2KiB/s (31.9kB/s)(1872KiB/60001msec) 00:27:57.786 slat (nsec): min=12092, max=46658, avg=22166.63, stdev=9059.55 00:27:57.786 clat (usec): min=603, max=40902k, avg=127760.14, stdev=1888817.30 00:27:57.786 lat (usec): min=624, max=40902k, avg=127782.31, stdev=1888817.89 00:27:57.786 clat percentiles (usec): 00:27:57.786 | 1.00th=[ 644], 5.00th=[ 40633], 10.00th=[ 41157], 00:27:57.786 | 20.00th=[ 41157], 30.00th=[ 41157], 40.00th=[ 41157], 00:27:57.786 | 50.00th=[ 41157], 60.00th=[ 41157], 70.00th=[ 41157], 00:27:57.786 | 80.00th=[ 41157], 90.00th=[ 42206], 95.00th=[ 42206], 00:27:57.786 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[17112761], 00:27:57.786 | 99.95th=[17112761], 99.99th=[17112761] 00:27:57.786 write: IOPS=8, BW=34.1KiB/s (35.0kB/s)(2048KiB/60001msec); 0 zone resets 00:27:57.786 slat (nsec): min=6475, max=41916, avg=12146.49, stdev=4177.30 00:27:57.786 clat (usec): min=260, max=555, avg=365.07, stdev=63.14 00:27:57.786 lat (usec): min=269, max=568, avg=377.21, stdev=64.55 00:27:57.786 clat percentiles (usec): 00:27:57.786 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 289], 00:27:57.786 | 30.00th=[ 302], 40.00th=[ 367], 50.00th=[ 379], 60.00th=[ 396], 00:27:57.786 | 70.00th=[ 408], 80.00th=[ 420], 90.00th=[ 433], 95.00th=[ 457], 00:27:57.786 | 99.00th=[ 490], 99.50th=[ 510], 99.90th=[ 553], 99.95th=[ 553], 00:27:57.786 | 99.99th=[ 553] 00:27:57.786 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:27:57.786 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:27:57.786 lat (usec) : 500=51.73%, 750=1.33% 00:27:57.786 lat (msec) : 50=46.84%, >=2000=0.10% 00:27:57.786 cpu : usr=0.03%, sys=0.03%, 
ctx=980, majf=0, minf=2 00:27:57.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.786 issued rwts: total=468,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:57.786 00:27:57.786 Run status group 0 (all jobs): 00:27:57.786 READ: bw=31.2KiB/s (31.9kB/s), 31.2KiB/s-31.2KiB/s (31.9kB/s-31.9kB/s), io=1872KiB (1917kB), run=60001-60001msec 00:27:57.786 WRITE: bw=34.1KiB/s (35.0kB/s), 34.1KiB/s-34.1KiB/s (35.0kB/s-35.0kB/s), io=2048KiB (2097kB), run=60001-60001msec 00:27:57.786 00:27:57.786 Disk stats (read/write): 00:27:57.786 nvme0n1: ios=564/512, merge=0/0, ticks=18901/190, in_queue=19091, util=99.59% 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:57.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:57.786 nvmf hotplug test: fio successful as expected 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:57.786 14:30:05 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:57.786 rmmod nvme_tcp 00:27:57.786 rmmod nvme_fabrics 00:27:57.786 rmmod nvme_keyring 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1452824 ']' 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1452824 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1452824 ']' 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1452824 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1452824 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1452824' 00:27:57.786 killing process with pid 1452824 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1452824 00:27:57.786 14:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1452824 00:27:57.786 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:57.786 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:57.786 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:57.786 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:57.786 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:57.786 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.786 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:57.786 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.685 14:30:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:59.685 00:27:59.685 real 1m10.074s 00:27:59.685 user 4m15.914s 00:27:59.685 sys 0m6.695s 00:27:59.685 14:30:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:59.685 14:30:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.685 ************************************ 00:27:59.685 END TEST nvmf_initiator_timeout 00:27:59.685 ************************************ 00:27:59.685 14:30:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:59.685 14:30:09 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:59.685 14:30:09 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:59.685 14:30:09 nvmf_tcp -- nvmf/nvmf.sh@73 -- # 
gather_supported_nvmf_pci_devs 00:27:59.685 14:30:09 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:27:59.685 14:30:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:01.584 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:01.584 14:30:11 
nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:01.584 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:01.584 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.584 14:30:11 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:01.585 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:01.585 14:30:11 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.585 14:30:11 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:01.585 14:30:11 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:01.585 14:30:11 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:28:01.585 14:30:11 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:01.585 14:30:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:01.585 14:30:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:01.585 14:30:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:01.585 ************************************ 00:28:01.585 START TEST nvmf_perf_adq 00:28:01.585 ************************************ 00:28:01.585 14:30:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
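For orientation before the perf_adq run: the device scan traced above amounts to matching PCI functions against the Intel E810 vendor:device ID (0x8086:0x159b) and reading each function's netdev name out of sysfs. A minimal stand-alone sketch of that mapping (not the autotest helper itself; device ID and paths taken from the log above):

#!/usr/bin/env bash
# Sketch only: enumerate E810 (8086:159b) functions and the net devices bound
# to them via sysfs, mirroring the pci_net_devs=(".../net/"*) step in the log.
set -euo pipefail

intel=0x8086
e810_dev=0x159b

for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")
    device=$(cat "$pci/device")
    [[ $vendor == "$intel" && $device == "$e810_dev" ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    # Each matching function exposes its kernel net device name(s) under .../net/
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "  net device: ${net##*/}"
    done
done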
00:28:01.843 * Looking for test storage... 00:28:01.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:01.843 14:30:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:03.743 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:03.743 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:03.743 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:03.743 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:28:03.743 14:30:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:04.676 14:30:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:06.577 14:30:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:11.848 14:30:20 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:11.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:11.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:11.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:11.848 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:11.849 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.849 14:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.849 14:30:20 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:11.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:28:11.849 00:28:11.849 --- 10.0.0.2 ping statistics --- 00:28:11.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.849 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:11.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:28:11.849 00:28:11.849 --- 10.0.0.1 ping statistics --- 00:28:11.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.849 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1465715 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1465715 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1465715 ']' 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:11.849 14:30:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.849 [2024-07-10 14:30:21.123506] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
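The nvmf_tcp_init sequence traced above splits the two E810 ports into a target side and an initiator side: the target port is moved into a network namespace so that initiator traffic actually crosses the physical link instead of being short-circuited by the local stack. A hand-run sketch of that topology (assumes two ports named cvl_0_0/cvl_0_1 and root privileges, as in this log):

#!/usr/bin/env bash
# Sketch of the test-net setup: target port in a namespace at 10.0.0.2,
# initiator port in the default namespace at 10.0.0.1, NVMe/TCP port opened.
set -euo pipefail

TGT_IF=cvl_0_0            # target side, moved into the namespace
INI_IF=cvl_0_1            # initiator side, stays in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the default listener port.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

With that in place, the target application is launched inside the namespace (here via ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is why the DPDK/EAL startup banner that follows is printed from within the namespace.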
00:28:11.849 [2024-07-10 14:30:21.123650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.849 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.849 [2024-07-10 14:30:21.253996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:12.107 [2024-07-10 14:30:21.476980] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.107 [2024-07-10 14:30:21.477044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.107 [2024-07-10 14:30:21.477066] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.107 [2024-07-10 14:30:21.477083] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.107 [2024-07-10 14:30:21.477100] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.107 [2024-07-10 14:30:21.477200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.107 [2024-07-10 14:30:21.477244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.107 [2024-07-10 14:30:21.477299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.107 [2024-07-10 14:30:21.477311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.673 14:30:22 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.239 [2024-07-10 14:30:22.495698] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.239 Malloc1 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.239 [2024-07-10 14:30:22.600920] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1465882 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:28:13.239 14:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:13.239 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.136 14:30:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:28:15.136 14:30:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.136 14:30:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.394 14:30:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.394 14:30:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:28:15.394 
"tick_rate": 2700000000, 00:28:15.394 "poll_groups": [ 00:28:15.394 { 00:28:15.394 "name": "nvmf_tgt_poll_group_000", 00:28:15.394 "admin_qpairs": 1, 00:28:15.394 "io_qpairs": 1, 00:28:15.394 "current_admin_qpairs": 1, 00:28:15.394 "current_io_qpairs": 1, 00:28:15.394 "pending_bdev_io": 0, 00:28:15.394 "completed_nvme_io": 17189, 00:28:15.394 "transports": [ 00:28:15.394 { 00:28:15.394 "trtype": "TCP" 00:28:15.394 } 00:28:15.394 ] 00:28:15.394 }, 00:28:15.394 { 00:28:15.394 "name": "nvmf_tgt_poll_group_001", 00:28:15.394 "admin_qpairs": 0, 00:28:15.394 "io_qpairs": 1, 00:28:15.394 "current_admin_qpairs": 0, 00:28:15.394 "current_io_qpairs": 1, 00:28:15.394 "pending_bdev_io": 0, 00:28:15.394 "completed_nvme_io": 16850, 00:28:15.394 "transports": [ 00:28:15.394 { 00:28:15.394 "trtype": "TCP" 00:28:15.394 } 00:28:15.394 ] 00:28:15.394 }, 00:28:15.394 { 00:28:15.394 "name": "nvmf_tgt_poll_group_002", 00:28:15.394 "admin_qpairs": 0, 00:28:15.394 "io_qpairs": 1, 00:28:15.394 "current_admin_qpairs": 0, 00:28:15.394 "current_io_qpairs": 1, 00:28:15.394 "pending_bdev_io": 0, 00:28:15.394 "completed_nvme_io": 16449, 00:28:15.394 "transports": [ 00:28:15.394 { 00:28:15.394 "trtype": "TCP" 00:28:15.394 } 00:28:15.394 ] 00:28:15.394 }, 00:28:15.394 { 00:28:15.394 "name": "nvmf_tgt_poll_group_003", 00:28:15.394 "admin_qpairs": 0, 00:28:15.394 "io_qpairs": 1, 00:28:15.394 "current_admin_qpairs": 0, 00:28:15.394 "current_io_qpairs": 1, 00:28:15.394 "pending_bdev_io": 0, 00:28:15.394 "completed_nvme_io": 17042, 00:28:15.394 "transports": [ 00:28:15.394 { 00:28:15.394 "trtype": "TCP" 00:28:15.394 } 00:28:15.394 ] 00:28:15.394 } 00:28:15.394 ] 00:28:15.394 }' 00:28:15.394 14:30:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:15.394 14:30:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:28:15.394 14:30:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:28:15.394 14:30:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:28:15.394 14:30:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1465882 00:28:23.503 Initializing NVMe Controllers 00:28:23.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:23.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:23.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:23.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:23.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:23.503 Initialization complete. Launching workers. 
00:28:23.503 ======================================================== 00:28:23.503 Latency(us) 00:28:23.503 Device Information : IOPS MiB/s Average min max 00:28:23.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9292.80 36.30 6887.29 6253.48 9208.94 00:28:23.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9152.30 35.75 6994.73 2481.78 10633.70 00:28:23.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9002.80 35.17 7109.35 2162.72 11664.63 00:28:23.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9366.70 36.59 6833.96 1897.05 9416.10 00:28:23.503 ======================================================== 00:28:23.503 Total : 36814.59 143.81 6954.74 1897.05 11664.63 00:28:23.503 00:28:23.503 14:30:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:28:23.503 14:30:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:23.503 14:30:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:23.503 14:30:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:23.503 14:30:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:23.503 14:30:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:23.503 14:30:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:23.503 rmmod nvme_tcp 00:28:23.503 rmmod nvme_fabrics 00:28:23.503 rmmod nvme_keyring 00:28:23.503 14:30:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:23.503 14:30:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:23.503 14:30:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:23.504 14:30:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1465715 ']' 00:28:23.504 14:30:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1465715 00:28:23.504 14:30:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1465715 ']' 00:28:23.504 14:30:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1465715 00:28:23.504 14:30:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:28:23.504 14:30:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:23.504 14:30:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1465715 00:28:23.504 14:30:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:23.504 14:30:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:23.504 14:30:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1465715' 00:28:23.504 killing process with pid 1465715 00:28:23.504 14:30:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1465715 00:28:23.504 14:30:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1465715 00:28:24.886 14:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:24.886 14:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:24.886 14:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:24.886 14:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:24.886 14:30:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:24.886 14:30:34 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.886 14:30:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:24.886 14:30:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.418 14:30:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:27.418 14:30:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:28:27.418 14:30:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:27.676 14:30:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:29.574 14:30:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.893 14:30:43 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:34.893 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:34.894 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:34.894 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:34.894 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:34.894 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:34.894 14:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.894 
14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:34.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:28:34.894 00:28:34.894 --- 10.0.0.2 ping statistics --- 00:28:34.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.894 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:34.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:28:34.894 00:28:34.894 --- 10.0.0.1 ping statistics --- 00:28:34.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.894 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:34.894 net.core.busy_poll = 1 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:34.894 net.core.busy_read = 1 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1468618 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1468618 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1468618 ']' 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:34.894 14:30:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.153 [2024-07-10 14:30:44.382179] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:28:35.153 [2024-07-10 14:30:44.382329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.153 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.153 [2024-07-10 14:30:44.520306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:35.410 [2024-07-10 14:30:44.782500] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.410 [2024-07-10 14:30:44.782582] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.410 [2024-07-10 14:30:44.782616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:35.410 [2024-07-10 14:30:44.782639] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:35.410 [2024-07-10 14:30:44.782664] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
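For readability, the ADQ setup traced above reduces to roughly the following commands; this is a condensed recap of the trace, and the interface name cvl_0_0, the 2+2 queue layout, and port 4420 are specific to this test node rather than a general recipe.

# enable hardware TC offload and busy polling for the target port
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# split the queues into two traffic classes (2 queues each) in channel mode
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
# steer NVMe/TCP traffic (10.0.0.2:4420) to traffic class 1 in hardware
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1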
00:28:35.410 [2024-07-10 14:30:44.782798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.410 [2024-07-10 14:30:44.782860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.410 [2024-07-10 14:30:44.782942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.410 [2024-07-10 14:30:44.782953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.976 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.234 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.234 14:30:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:36.234 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.234 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.234 [2024-07-10 14:30:45.687237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.234 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.234 14:30:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:36.234 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.234 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.492 Malloc1 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.492 14:30:45 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.492 [2024-07-10 14:30:45.789378] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1468898 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:36.492 14:30:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:28:36.492 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.491 14:30:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:28:38.491 14:30:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.491 14:30:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.491 14:30:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.491 14:30:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:28:38.491 "tick_rate": 2700000000, 00:28:38.491 "poll_groups": [ 00:28:38.491 { 00:28:38.491 "name": "nvmf_tgt_poll_group_000", 00:28:38.491 "admin_qpairs": 1, 00:28:38.491 "io_qpairs": 1, 00:28:38.491 "current_admin_qpairs": 1, 00:28:38.491 "current_io_qpairs": 1, 00:28:38.491 "pending_bdev_io": 0, 00:28:38.491 "completed_nvme_io": 16065, 00:28:38.491 "transports": [ 00:28:38.491 { 00:28:38.491 "trtype": "TCP" 00:28:38.491 } 00:28:38.491 ] 00:28:38.491 }, 00:28:38.491 { 00:28:38.491 "name": "nvmf_tgt_poll_group_001", 00:28:38.491 "admin_qpairs": 0, 00:28:38.491 "io_qpairs": 3, 00:28:38.491 "current_admin_qpairs": 0, 00:28:38.491 "current_io_qpairs": 3, 00:28:38.491 "pending_bdev_io": 0, 00:28:38.491 "completed_nvme_io": 21033, 00:28:38.491 "transports": [ 00:28:38.491 { 00:28:38.491 "trtype": "TCP" 00:28:38.491 } 00:28:38.491 ] 00:28:38.491 }, 00:28:38.491 { 00:28:38.491 "name": "nvmf_tgt_poll_group_002", 00:28:38.491 "admin_qpairs": 0, 00:28:38.491 "io_qpairs": 0, 00:28:38.491 "current_admin_qpairs": 0, 00:28:38.491 "current_io_qpairs": 0, 00:28:38.491 "pending_bdev_io": 0, 00:28:38.491 "completed_nvme_io": 0, 
00:28:38.491 "transports": [ 00:28:38.491 { 00:28:38.491 "trtype": "TCP" 00:28:38.491 } 00:28:38.491 ] 00:28:38.491 }, 00:28:38.491 { 00:28:38.491 "name": "nvmf_tgt_poll_group_003", 00:28:38.491 "admin_qpairs": 0, 00:28:38.491 "io_qpairs": 0, 00:28:38.491 "current_admin_qpairs": 0, 00:28:38.491 "current_io_qpairs": 0, 00:28:38.491 "pending_bdev_io": 0, 00:28:38.491 "completed_nvme_io": 0, 00:28:38.491 "transports": [ 00:28:38.491 { 00:28:38.491 "trtype": "TCP" 00:28:38.491 } 00:28:38.491 ] 00:28:38.491 } 00:28:38.491 ] 00:28:38.491 }' 00:28:38.491 14:30:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:38.491 14:30:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:28:38.491 14:30:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:28:38.492 14:30:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:28:38.492 14:30:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1468898 00:28:46.599 Initializing NVMe Controllers 00:28:46.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:46.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:46.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:46.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:46.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:46.599 Initialization complete. Launching workers. 00:28:46.599 ======================================================== 00:28:46.599 Latency(us) 00:28:46.599 Device Information : IOPS MiB/s Average min max 00:28:46.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8898.50 34.76 7196.17 3172.98 10469.57 00:28:46.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4052.70 15.83 15796.75 2385.34 65208.36 00:28:46.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 3721.40 14.54 17202.18 3246.75 66385.92 00:28:46.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3621.90 14.15 17673.86 3711.46 66070.97 00:28:46.599 ======================================================== 00:28:46.599 Total : 20294.50 79.28 12618.38 2385.34 66385.92 00:28:46.599 00:28:46.599 14:30:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:28:46.599 14:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:46.599 14:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:46.599 14:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:46.599 14:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:46.599 14:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:46.599 14:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:46.599 rmmod nvme_tcp 00:28:46.599 rmmod nvme_fabrics 00:28:46.599 rmmod nvme_keyring 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1468618 ']' 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1468618 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1468618 ']' 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1468618 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1468618 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1468618' 00:28:46.857 killing process with pid 1468618 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1468618 00:28:46.857 14:30:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1468618 00:28:48.231 14:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:48.231 14:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:48.231 14:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:48.231 14:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:48.231 14:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:48.231 14:30:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.231 14:30:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.231 14:30:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.131 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:50.132 14:30:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:50.132 00:28:50.132 real 0m48.566s 00:28:50.132 user 2m47.034s 00:28:50.132 sys 0m11.939s 00:28:50.132 14:30:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:50.132 14:30:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:50.132 ************************************ 00:28:50.132 END TEST nvmf_perf_adq 00:28:50.132 ************************************ 00:28:50.390 14:30:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:50.390 14:30:59 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:50.390 14:30:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:50.390 14:30:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.390 14:30:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:50.390 ************************************ 00:28:50.390 START TEST nvmf_shutdown 00:28:50.390 ************************************ 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:50.390 * Looking for test storage... 
00:28:50.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.390 14:30:59 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:50.391 ************************************ 00:28:50.391 START TEST nvmf_shutdown_tc1 00:28:50.391 ************************************ 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:28:50.391 14:30:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:50.391 14:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:52.289 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:52.289 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.289 14:31:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:52.289 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:52.289 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:52.289 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.290 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.290 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:52.290 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:52.290 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.290 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:52.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:28:52.548 00:28:52.548 --- 10.0.0.2 ping statistics --- 00:28:52.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.548 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:52.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:28:52.548 00:28:52.548 --- 10.0.0.1 ping statistics --- 00:28:52.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.548 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1472189 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1472189 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1472189 ']' 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:52.548 14:31:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:52.548 [2024-07-10 14:31:01.981186] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
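The connectivity checks above use the same single-host, namespace-based topology as the perf_adq run earlier in the log: the target port is moved into the cvl_0_0_ns_spdk namespace while the initiator port stays in the root namespace, so NVMe/TCP traffic crosses real hardware on one machine. A condensed sketch taken from the traced commands (interface names and addresses are node-specific):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # accept NVMe/TCP from the peer port
ping -c 1 10.0.0.2                                               # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target namespace -> root namespace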
00:28:52.548 [2024-07-10 14:31:01.981348] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.806 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.806 [2024-07-10 14:31:02.129702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:53.064 [2024-07-10 14:31:02.391445] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.064 [2024-07-10 14:31:02.391521] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.064 [2024-07-10 14:31:02.391549] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.064 [2024-07-10 14:31:02.391570] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.064 [2024-07-10 14:31:02.391592] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:53.064 [2024-07-10 14:31:02.391735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:53.064 [2024-07-10 14:31:02.391819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:53.064 [2024-07-10 14:31:02.391872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.064 [2024-07-10 14:31:02.391883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:53.634 [2024-07-10 14:31:02.935834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:53.634 14:31:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.634 14:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:53.634 Malloc1 00:28:53.634 [2024-07-10 14:31:03.061442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.892 Malloc2 00:28:53.892 Malloc3 00:28:53.892 Malloc4 00:28:54.149 Malloc5 00:28:54.149 Malloc6 00:28:54.149 Malloc7 00:28:54.407 Malloc8 00:28:54.407 Malloc9 00:28:54.665 Malloc10 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1472500 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1472500 
/var/tmp/bdevperf.sock 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1472500 ']' 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:54.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.665 { 00:28:54.665 "params": { 00:28:54.665 "name": "Nvme$subsystem", 00:28:54.665 "trtype": "$TEST_TRANSPORT", 00:28:54.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.665 "adrfam": "ipv4", 00:28:54.665 "trsvcid": "$NVMF_PORT", 00:28:54.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.665 "hdgst": ${hdgst:-false}, 00:28:54.665 "ddgst": ${ddgst:-false} 00:28:54.665 }, 00:28:54.665 "method": "bdev_nvme_attach_controller" 00:28:54.665 } 00:28:54.665 EOF 00:28:54.665 )") 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.665 { 00:28:54.665 "params": { 00:28:54.665 "name": "Nvme$subsystem", 00:28:54.665 "trtype": "$TEST_TRANSPORT", 00:28:54.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.665 "adrfam": "ipv4", 00:28:54.665 "trsvcid": "$NVMF_PORT", 00:28:54.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.665 "hdgst": ${hdgst:-false}, 00:28:54.665 "ddgst": ${ddgst:-false} 00:28:54.665 }, 00:28:54.665 "method": "bdev_nvme_attach_controller" 00:28:54.665 } 00:28:54.665 EOF 00:28:54.665 )") 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.665 { 00:28:54.665 "params": { 00:28:54.665 
"name": "Nvme$subsystem", 00:28:54.665 "trtype": "$TEST_TRANSPORT", 00:28:54.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.665 "adrfam": "ipv4", 00:28:54.665 "trsvcid": "$NVMF_PORT", 00:28:54.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.665 "hdgst": ${hdgst:-false}, 00:28:54.665 "ddgst": ${ddgst:-false} 00:28:54.665 }, 00:28:54.665 "method": "bdev_nvme_attach_controller" 00:28:54.665 } 00:28:54.665 EOF 00:28:54.665 )") 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.665 { 00:28:54.665 "params": { 00:28:54.665 "name": "Nvme$subsystem", 00:28:54.665 "trtype": "$TEST_TRANSPORT", 00:28:54.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.665 "adrfam": "ipv4", 00:28:54.665 "trsvcid": "$NVMF_PORT", 00:28:54.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.665 "hdgst": ${hdgst:-false}, 00:28:54.665 "ddgst": ${ddgst:-false} 00:28:54.665 }, 00:28:54.665 "method": "bdev_nvme_attach_controller" 00:28:54.665 } 00:28:54.665 EOF 00:28:54.665 )") 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.665 { 00:28:54.665 "params": { 00:28:54.665 "name": "Nvme$subsystem", 00:28:54.665 "trtype": "$TEST_TRANSPORT", 00:28:54.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.665 "adrfam": "ipv4", 00:28:54.665 "trsvcid": "$NVMF_PORT", 00:28:54.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.665 "hdgst": ${hdgst:-false}, 00:28:54.665 "ddgst": ${ddgst:-false} 00:28:54.665 }, 00:28:54.665 "method": "bdev_nvme_attach_controller" 00:28:54.665 } 00:28:54.665 EOF 00:28:54.665 )") 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.665 { 00:28:54.665 "params": { 00:28:54.665 "name": "Nvme$subsystem", 00:28:54.665 "trtype": "$TEST_TRANSPORT", 00:28:54.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.665 "adrfam": "ipv4", 00:28:54.665 "trsvcid": "$NVMF_PORT", 00:28:54.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.665 "hdgst": ${hdgst:-false}, 00:28:54.665 "ddgst": ${ddgst:-false} 00:28:54.665 }, 00:28:54.665 "method": "bdev_nvme_attach_controller" 00:28:54.665 } 00:28:54.665 EOF 00:28:54.665 )") 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.665 { 00:28:54.665 "params": { 00:28:54.665 "name": "Nvme$subsystem", 
00:28:54.665 "trtype": "$TEST_TRANSPORT", 00:28:54.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.665 "adrfam": "ipv4", 00:28:54.665 "trsvcid": "$NVMF_PORT", 00:28:54.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.665 "hdgst": ${hdgst:-false}, 00:28:54.665 "ddgst": ${ddgst:-false} 00:28:54.665 }, 00:28:54.665 "method": "bdev_nvme_attach_controller" 00:28:54.665 } 00:28:54.665 EOF 00:28:54.665 )") 00:28:54.665 14:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.665 14:31:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.665 14:31:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.665 { 00:28:54.665 "params": { 00:28:54.665 "name": "Nvme$subsystem", 00:28:54.665 "trtype": "$TEST_TRANSPORT", 00:28:54.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.665 "adrfam": "ipv4", 00:28:54.665 "trsvcid": "$NVMF_PORT", 00:28:54.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.665 "hdgst": ${hdgst:-false}, 00:28:54.665 "ddgst": ${ddgst:-false} 00:28:54.665 }, 00:28:54.665 "method": "bdev_nvme_attach_controller" 00:28:54.665 } 00:28:54.665 EOF 00:28:54.665 )") 00:28:54.665 14:31:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.665 14:31:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.665 14:31:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.665 { 00:28:54.665 "params": { 00:28:54.665 "name": "Nvme$subsystem", 00:28:54.665 "trtype": "$TEST_TRANSPORT", 00:28:54.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.665 "adrfam": "ipv4", 00:28:54.665 "trsvcid": "$NVMF_PORT", 00:28:54.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.665 "hdgst": ${hdgst:-false}, 00:28:54.665 "ddgst": ${ddgst:-false} 00:28:54.665 }, 00:28:54.665 "method": "bdev_nvme_attach_controller" 00:28:54.665 } 00:28:54.665 EOF 00:28:54.665 )") 00:28:54.665 14:31:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.665 14:31:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.665 14:31:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.665 { 00:28:54.665 "params": { 00:28:54.665 "name": "Nvme$subsystem", 00:28:54.665 "trtype": "$TEST_TRANSPORT", 00:28:54.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.665 "adrfam": "ipv4", 00:28:54.666 "trsvcid": "$NVMF_PORT", 00:28:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.666 "hdgst": ${hdgst:-false}, 00:28:54.666 "ddgst": ${ddgst:-false} 00:28:54.666 }, 00:28:54.666 "method": "bdev_nvme_attach_controller" 00:28:54.666 } 00:28:54.666 EOF 00:28:54.666 )") 00:28:54.666 14:31:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.666 14:31:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:28:54.666 14:31:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:54.666 14:31:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:54.666 "params": { 00:28:54.666 "name": "Nvme1", 00:28:54.666 "trtype": "tcp", 00:28:54.666 "traddr": "10.0.0.2", 00:28:54.666 "adrfam": "ipv4", 00:28:54.666 "trsvcid": "4420", 00:28:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:54.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:54.666 "hdgst": false, 00:28:54.666 "ddgst": false 00:28:54.666 }, 00:28:54.666 "method": "bdev_nvme_attach_controller" 00:28:54.666 },{ 00:28:54.666 "params": { 00:28:54.666 "name": "Nvme2", 00:28:54.666 "trtype": "tcp", 00:28:54.666 "traddr": "10.0.0.2", 00:28:54.666 "adrfam": "ipv4", 00:28:54.666 "trsvcid": "4420", 00:28:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:54.666 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:54.666 "hdgst": false, 00:28:54.666 "ddgst": false 00:28:54.666 }, 00:28:54.666 "method": "bdev_nvme_attach_controller" 00:28:54.666 },{ 00:28:54.666 "params": { 00:28:54.666 "name": "Nvme3", 00:28:54.666 "trtype": "tcp", 00:28:54.666 "traddr": "10.0.0.2", 00:28:54.666 "adrfam": "ipv4", 00:28:54.666 "trsvcid": "4420", 00:28:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:54.666 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:54.666 "hdgst": false, 00:28:54.666 "ddgst": false 00:28:54.666 }, 00:28:54.666 "method": "bdev_nvme_attach_controller" 00:28:54.666 },{ 00:28:54.666 "params": { 00:28:54.666 "name": "Nvme4", 00:28:54.666 "trtype": "tcp", 00:28:54.666 "traddr": "10.0.0.2", 00:28:54.666 "adrfam": "ipv4", 00:28:54.666 "trsvcid": "4420", 00:28:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:54.666 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:54.666 "hdgst": false, 00:28:54.666 "ddgst": false 00:28:54.666 }, 00:28:54.666 "method": "bdev_nvme_attach_controller" 00:28:54.666 },{ 00:28:54.666 "params": { 00:28:54.666 "name": "Nvme5", 00:28:54.666 "trtype": "tcp", 00:28:54.666 "traddr": "10.0.0.2", 00:28:54.666 "adrfam": "ipv4", 00:28:54.666 "trsvcid": "4420", 00:28:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:54.666 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:54.666 "hdgst": false, 00:28:54.666 "ddgst": false 00:28:54.666 }, 00:28:54.666 "method": "bdev_nvme_attach_controller" 00:28:54.666 },{ 00:28:54.666 "params": { 00:28:54.666 "name": "Nvme6", 00:28:54.666 "trtype": "tcp", 00:28:54.666 "traddr": "10.0.0.2", 00:28:54.666 "adrfam": "ipv4", 00:28:54.666 "trsvcid": "4420", 00:28:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:54.666 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:54.666 "hdgst": false, 00:28:54.666 "ddgst": false 00:28:54.666 }, 00:28:54.666 "method": "bdev_nvme_attach_controller" 00:28:54.666 },{ 00:28:54.666 "params": { 00:28:54.666 "name": "Nvme7", 00:28:54.666 "trtype": "tcp", 00:28:54.666 "traddr": "10.0.0.2", 00:28:54.666 "adrfam": "ipv4", 00:28:54.666 "trsvcid": "4420", 00:28:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:54.666 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:54.666 "hdgst": false, 00:28:54.666 "ddgst": false 00:28:54.666 }, 00:28:54.666 "method": "bdev_nvme_attach_controller" 00:28:54.666 },{ 00:28:54.666 "params": { 00:28:54.666 "name": "Nvme8", 00:28:54.666 "trtype": "tcp", 00:28:54.666 "traddr": "10.0.0.2", 00:28:54.666 "adrfam": "ipv4", 00:28:54.666 "trsvcid": "4420", 00:28:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:54.666 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:54.666 "hdgst": false, 
00:28:54.666 "ddgst": false 00:28:54.666 }, 00:28:54.666 "method": "bdev_nvme_attach_controller" 00:28:54.666 },{ 00:28:54.666 "params": { 00:28:54.666 "name": "Nvme9", 00:28:54.666 "trtype": "tcp", 00:28:54.666 "traddr": "10.0.0.2", 00:28:54.666 "adrfam": "ipv4", 00:28:54.666 "trsvcid": "4420", 00:28:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:54.666 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:54.666 "hdgst": false, 00:28:54.666 "ddgst": false 00:28:54.666 }, 00:28:54.666 "method": "bdev_nvme_attach_controller" 00:28:54.666 },{ 00:28:54.666 "params": { 00:28:54.666 "name": "Nvme10", 00:28:54.666 "trtype": "tcp", 00:28:54.666 "traddr": "10.0.0.2", 00:28:54.666 "adrfam": "ipv4", 00:28:54.666 "trsvcid": "4420", 00:28:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:54.666 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:54.666 "hdgst": false, 00:28:54.666 "ddgst": false 00:28:54.666 }, 00:28:54.666 "method": "bdev_nvme_attach_controller" 00:28:54.666 }' 00:28:54.666 [2024-07-10 14:31:04.059319] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:28:54.666 [2024-07-10 14:31:04.059486] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:54.666 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.924 [2024-07-10 14:31:04.186473] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.182 [2024-07-10 14:31:04.427728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.719 14:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.719 14:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:28:57.719 14:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:57.720 14:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.720 14:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:57.720 14:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.720 14:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1472500 00:28:57.720 14:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:57.720 14:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:28:58.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1472500 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:58.285 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1472189 00:28:58.285 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:58.285 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:58.285 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:58.285 14:31:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:58.285 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:58.285 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:58.285 { 00:28:58.285 "params": { 00:28:58.285 "name": "Nvme$subsystem", 00:28:58.285 "trtype": "$TEST_TRANSPORT", 00:28:58.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.285 "adrfam": "ipv4", 00:28:58.285 "trsvcid": "$NVMF_PORT", 00:28:58.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.285 "hdgst": ${hdgst:-false}, 00:28:58.285 "ddgst": ${ddgst:-false} 00:28:58.285 }, 00:28:58.285 "method": "bdev_nvme_attach_controller" 00:28:58.285 } 00:28:58.285 EOF 00:28:58.285 )") 00:28:58.285 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:58.544 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:58.544 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:58.544 { 00:28:58.544 "params": { 00:28:58.544 "name": "Nvme$subsystem", 00:28:58.544 "trtype": "$TEST_TRANSPORT", 00:28:58.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.544 "adrfam": "ipv4", 00:28:58.544 "trsvcid": "$NVMF_PORT", 00:28:58.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.544 "hdgst": ${hdgst:-false}, 00:28:58.544 "ddgst": ${ddgst:-false} 00:28:58.544 }, 00:28:58.544 "method": "bdev_nvme_attach_controller" 00:28:58.544 } 00:28:58.544 EOF 00:28:58.544 )") 00:28:58.544 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:58.544 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:58.544 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:58.544 { 00:28:58.544 "params": { 00:28:58.544 "name": "Nvme$subsystem", 00:28:58.544 "trtype": "$TEST_TRANSPORT", 00:28:58.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.544 "adrfam": "ipv4", 00:28:58.544 "trsvcid": "$NVMF_PORT", 00:28:58.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.544 "hdgst": ${hdgst:-false}, 00:28:58.544 "ddgst": ${ddgst:-false} 00:28:58.544 }, 00:28:58.544 "method": "bdev_nvme_attach_controller" 00:28:58.544 } 00:28:58.544 EOF 00:28:58.544 )") 00:28:58.544 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:58.544 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:58.544 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:58.544 { 00:28:58.544 "params": { 00:28:58.544 "name": "Nvme$subsystem", 00:28:58.544 "trtype": "$TEST_TRANSPORT", 00:28:58.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.544 "adrfam": "ipv4", 00:28:58.544 "trsvcid": "$NVMF_PORT", 00:28:58.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.544 "hdgst": ${hdgst:-false}, 00:28:58.544 "ddgst": ${ddgst:-false} 00:28:58.544 }, 00:28:58.544 "method": "bdev_nvme_attach_controller" 00:28:58.544 } 00:28:58.544 EOF 00:28:58.544 )") 00:28:58.544 14:31:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:58.544 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:58.544 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:58.544 { 00:28:58.544 "params": { 00:28:58.544 "name": "Nvme$subsystem", 00:28:58.544 "trtype": "$TEST_TRANSPORT", 00:28:58.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.544 "adrfam": "ipv4", 00:28:58.544 "trsvcid": "$NVMF_PORT", 00:28:58.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.544 "hdgst": ${hdgst:-false}, 00:28:58.544 "ddgst": ${ddgst:-false} 00:28:58.544 }, 00:28:58.544 "method": "bdev_nvme_attach_controller" 00:28:58.544 } 00:28:58.544 EOF 00:28:58.544 )") 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:58.545 { 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme$subsystem", 00:28:58.545 "trtype": "$TEST_TRANSPORT", 00:28:58.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "$NVMF_PORT", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.545 "hdgst": ${hdgst:-false}, 00:28:58.545 "ddgst": ${ddgst:-false} 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 } 00:28:58.545 EOF 00:28:58.545 )") 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:58.545 { 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme$subsystem", 00:28:58.545 "trtype": "$TEST_TRANSPORT", 00:28:58.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "$NVMF_PORT", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.545 "hdgst": ${hdgst:-false}, 00:28:58.545 "ddgst": ${ddgst:-false} 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 } 00:28:58.545 EOF 00:28:58.545 )") 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:58.545 { 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme$subsystem", 00:28:58.545 "trtype": "$TEST_TRANSPORT", 00:28:58.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "$NVMF_PORT", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.545 "hdgst": ${hdgst:-false}, 00:28:58.545 "ddgst": ${ddgst:-false} 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 } 00:28:58.545 EOF 00:28:58.545 )") 00:28:58.545 14:31:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:58.545 { 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme$subsystem", 00:28:58.545 "trtype": "$TEST_TRANSPORT", 00:28:58.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "$NVMF_PORT", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.545 "hdgst": ${hdgst:-false}, 00:28:58.545 "ddgst": ${ddgst:-false} 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 } 00:28:58.545 EOF 00:28:58.545 )") 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:58.545 { 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme$subsystem", 00:28:58.545 "trtype": "$TEST_TRANSPORT", 00:28:58.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "$NVMF_PORT", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.545 "hdgst": ${hdgst:-false}, 00:28:58.545 "ddgst": ${ddgst:-false} 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 } 00:28:58.545 EOF 00:28:58.545 )") 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:58.545 14:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme1", 00:28:58.545 "trtype": "tcp", 00:28:58.545 "traddr": "10.0.0.2", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "4420", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:58.545 "hdgst": false, 00:28:58.545 "ddgst": false 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 },{ 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme2", 00:28:58.545 "trtype": "tcp", 00:28:58.545 "traddr": "10.0.0.2", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "4420", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:58.545 "hdgst": false, 00:28:58.545 "ddgst": false 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 },{ 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme3", 00:28:58.545 "trtype": "tcp", 00:28:58.545 "traddr": "10.0.0.2", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "4420", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:58.545 "hdgst": false, 00:28:58.545 "ddgst": false 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 },{ 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme4", 00:28:58.545 "trtype": "tcp", 00:28:58.545 "traddr": "10.0.0.2", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "4420", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:58.545 "hdgst": false, 00:28:58.545 "ddgst": false 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 },{ 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme5", 00:28:58.545 "trtype": "tcp", 00:28:58.545 "traddr": "10.0.0.2", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "4420", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:58.545 "hdgst": false, 00:28:58.545 "ddgst": false 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 },{ 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme6", 00:28:58.545 "trtype": "tcp", 00:28:58.545 "traddr": "10.0.0.2", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "4420", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:58.545 "hdgst": false, 00:28:58.545 "ddgst": false 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 },{ 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme7", 00:28:58.545 "trtype": "tcp", 00:28:58.545 "traddr": "10.0.0.2", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "4420", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:58.545 "hdgst": false, 00:28:58.545 "ddgst": false 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 },{ 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme8", 00:28:58.545 "trtype": "tcp", 00:28:58.545 "traddr": "10.0.0.2", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "4420", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:58.545 "hdgst": false, 
00:28:58.545 "ddgst": false 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 },{ 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme9", 00:28:58.545 "trtype": "tcp", 00:28:58.545 "traddr": "10.0.0.2", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "4420", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:58.545 "hdgst": false, 00:28:58.545 "ddgst": false 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 },{ 00:28:58.545 "params": { 00:28:58.545 "name": "Nvme10", 00:28:58.545 "trtype": "tcp", 00:28:58.545 "traddr": "10.0.0.2", 00:28:58.545 "adrfam": "ipv4", 00:28:58.545 "trsvcid": "4420", 00:28:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:58.545 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:58.545 "hdgst": false, 00:28:58.545 "ddgst": false 00:28:58.545 }, 00:28:58.545 "method": "bdev_nvme_attach_controller" 00:28:58.545 }' 00:28:58.545 [2024-07-10 14:31:07.848088] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:28:58.545 [2024-07-10 14:31:07.848222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472926 ] 00:28:58.545 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.545 [2024-07-10 14:31:07.977532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.804 [2024-07-10 14:31:08.219289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.701 Running I/O for 1 seconds... 00:29:01.633 00:29:01.633 Latency(us) 00:29:01.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.633 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.633 Verification LBA range: start 0x0 length 0x400 00:29:01.633 Nvme1n1 : 1.12 184.75 11.55 0.00 0.00 326463.79 24272.59 306028.85 00:29:01.633 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.634 Verification LBA range: start 0x0 length 0x400 00:29:01.634 Nvme2n1 : 1.23 207.34 12.96 0.00 0.00 298734.74 22622.06 298261.62 00:29:01.634 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.634 Verification LBA range: start 0x0 length 0x400 00:29:01.634 Nvme3n1 : 1.22 209.52 13.10 0.00 0.00 292525.89 22622.06 309135.74 00:29:01.634 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.634 Verification LBA range: start 0x0 length 0x400 00:29:01.634 Nvme4n1 : 1.23 207.80 12.99 0.00 0.00 289894.78 20874.43 313796.08 00:29:01.634 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.634 Verification LBA range: start 0x0 length 0x400 00:29:01.634 Nvme5n1 : 1.14 168.24 10.52 0.00 0.00 349924.31 26020.22 316902.97 00:29:01.634 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.634 Verification LBA range: start 0x0 length 0x400 00:29:01.634 Nvme6n1 : 1.15 170.40 10.65 0.00 0.00 337947.43 5024.43 306028.85 00:29:01.634 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.634 Verification LBA range: start 0x0 length 0x400 00:29:01.634 Nvme7n1 : 1.25 205.52 12.85 0.00 0.00 278623.19 20388.98 306028.85 00:29:01.634 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.634 Verification LBA range: start 
0x0 length 0x400 00:29:01.634 Nvme8n1 : 1.25 204.33 12.77 0.00 0.00 275418.26 23787.14 315349.52 00:29:01.634 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.634 Verification LBA range: start 0x0 length 0x400 00:29:01.634 Nvme9n1 : 1.26 202.70 12.67 0.00 0.00 273117.11 25243.50 346418.44 00:29:01.634 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.634 Verification LBA range: start 0x0 length 0x400 00:29:01.634 Nvme10n1 : 1.27 201.43 12.59 0.00 0.00 270320.64 20388.98 324670.20 00:29:01.634 =================================================================================================================== 00:29:01.634 Total : 1962.03 122.63 0.00 0.00 296410.40 5024.43 346418.44 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:03.006 rmmod nvme_tcp 00:29:03.006 rmmod nvme_fabrics 00:29:03.006 rmmod nvme_keyring 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1472189 ']' 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1472189 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1472189 ']' 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1472189 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1472189 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
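A quick reader-side sanity check on the bdevperf table above (not part of the test run): with -o 65536 the workload uses 64 KiB I/Os, so MiB/s should equal IOPS * 65536 / 1048576 = IOPS / 16. That holds for the rows shown, e.g. Nvme1n1 at 184.75 IOPS -> 11.55 MiB/s, and the one-second total of 1962.03 IOPS -> 122.63 MiB/s.

# Recompute throughput from IOPS and the -o 65536 I/O size used above.
awk -v iops=1962.03 -v iosz=65536 'BEGIN { printf "%.2f MiB/s\n", iops * iosz / 1048576 }'
# -> 122.63 MiB/s, matching the "Total" row of the table.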
00:29:03.006 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1472189' 00:29:03.006 killing process with pid 1472189 00:29:03.007 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1472189 00:29:03.007 14:31:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1472189 00:29:06.284 14:31:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:06.285 14:31:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:06.285 14:31:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:06.285 14:31:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:06.285 14:31:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:06.285 14:31:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.285 14:31:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:06.285 14:31:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:08.185 00:29:08.185 real 0m17.480s 00:29:08.185 user 0m56.395s 00:29:08.185 sys 0m3.834s 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:08.185 ************************************ 00:29:08.185 END TEST nvmf_shutdown_tc1 00:29:08.185 ************************************ 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:08.185 ************************************ 00:29:08.185 START TEST nvmf_shutdown_tc2 00:29:08.185 ************************************ 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:08.185 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.186 14:31:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:08.186 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:08.186 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:08.186 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:08.186 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:08.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:29:08.186 00:29:08.186 --- 10.0.0.2 ping statistics --- 00:29:08.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.186 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:08.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:29:08.186 00:29:08.186 --- 10.0.0.1 ping statistics --- 00:29:08.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.186 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:08.186 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:08.187 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.187 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=1474202 00:29:08.187 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:08.187 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1474202 00:29:08.187 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1474202 ']' 00:29:08.187 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.187 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:08.187 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.187 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:08.187 14:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.187 [2024-07-10 14:31:17.570216] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:29:08.187 [2024-07-10 14:31:17.570367] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.187 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.445 [2024-07-10 14:31:17.711237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:08.704 [2024-07-10 14:31:17.975257] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.704 [2024-07-10 14:31:17.975331] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.704 [2024-07-10 14:31:17.975359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.704 [2024-07-10 14:31:17.975380] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.704 [2024-07-10 14:31:17.975402] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
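For readers decoding the masks in the nvmf_tgt invocation above: -m 0x1E is the reactor core mask, binary 11110, i.e. cores 1-4, which is why "Total cores available: 4" is reported and why the next lines show reactors starting on cores 1, 2, 3 and 4 (-e 0xFFFF is the tracepoint group mask echoed back by app_setup_trace). A purely illustrative one-liner for decoding such a mask:

# Decode an SPDK -m core mask into the cores it selects (illustrative only).
mask=0x1E
for bit in $(seq 0 31); do
  if (( (mask >> bit) & 1 )); then printf 'core %d\n' "$bit"; fi
done
# -> core 1 .. core 4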
00:29:08.704 [2024-07-10 14:31:17.975552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.704 [2024-07-10 14:31:17.975659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:08.704 [2024-07-10 14:31:17.975702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.704 [2024-07-10 14:31:17.975713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.271 [2024-07-10 14:31:18.568967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:09.271 14:31:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.271 14:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.271 Malloc1 00:29:09.271 [2024-07-10 14:31:18.704769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.530 Malloc2 00:29:09.530 Malloc3 00:29:09.530 Malloc4 00:29:09.788 Malloc5 00:29:09.788 Malloc6 00:29:09.788 Malloc7 00:29:10.047 Malloc8 00:29:10.047 Malloc9 00:29:10.306 Malloc10 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1474501 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1474501 /var/tmp/bdevperf.sock 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1474501 ']' 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:10.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
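At this point the tc2 target side is up (TCP transport created, Malloc1-Malloc10 backing bdevs, listener on 10.0.0.2 port 4420) and the bdevperf client is started against /var/tmp/bdevperf.sock with the output of gen_nvmf_target_json fed in over process substitution. A hand-rolled single-controller equivalent is sketched below: the params mirror one entry of the generated JSON visible earlier in this log and the bdevperf flags (-q 64 -o 65536 -w verify, -t 10) are the ones in the trace, but the outer "subsystems"/"config" wrapper is the standard SPDK JSON-config layout, which this log itself does not show, so treat that part as an assumption.

# Minimal stand-in for what the test drives here: attach one NVMe-oF/TCP
# controller from a JSON config and run the same bdevperf verify workload.
# Addresses and NQNs are copied from the trace; the temp path is arbitrary
# and the command is meant to be run from the root of an SPDK build tree.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/nvme1.json -q 64 -o 65536 -w verify -t 10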
00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.306 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.306 { 00:29:10.306 "params": { 00:29:10.306 "name": "Nvme$subsystem", 00:29:10.306 "trtype": "$TEST_TRANSPORT", 00:29:10.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.306 "adrfam": "ipv4", 00:29:10.306 "trsvcid": "$NVMF_PORT", 00:29:10.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.306 "hdgst": ${hdgst:-false}, 00:29:10.306 "ddgst": ${ddgst:-false} 00:29:10.306 }, 00:29:10.306 "method": "bdev_nvme_attach_controller" 00:29:10.306 } 00:29:10.306 EOF 00:29:10.306 )") 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.307 { 00:29:10.307 "params": { 00:29:10.307 "name": "Nvme$subsystem", 00:29:10.307 "trtype": "$TEST_TRANSPORT", 00:29:10.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.307 "adrfam": "ipv4", 00:29:10.307 "trsvcid": "$NVMF_PORT", 00:29:10.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.307 "hdgst": ${hdgst:-false}, 00:29:10.307 "ddgst": ${ddgst:-false} 00:29:10.307 }, 00:29:10.307 "method": "bdev_nvme_attach_controller" 00:29:10.307 } 00:29:10.307 EOF 00:29:10.307 )") 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.307 { 00:29:10.307 "params": { 00:29:10.307 "name": "Nvme$subsystem", 00:29:10.307 "trtype": "$TEST_TRANSPORT", 00:29:10.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.307 "adrfam": "ipv4", 00:29:10.307 "trsvcid": "$NVMF_PORT", 00:29:10.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.307 "hdgst": ${hdgst:-false}, 00:29:10.307 "ddgst": ${ddgst:-false} 00:29:10.307 }, 00:29:10.307 "method": "bdev_nvme_attach_controller" 00:29:10.307 } 00:29:10.307 EOF 00:29:10.307 )") 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.307 { 00:29:10.307 "params": { 00:29:10.307 "name": "Nvme$subsystem", 00:29:10.307 "trtype": "$TEST_TRANSPORT", 00:29:10.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.307 "adrfam": "ipv4", 00:29:10.307 "trsvcid": "$NVMF_PORT", 
00:29:10.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.307 "hdgst": ${hdgst:-false}, 00:29:10.307 "ddgst": ${ddgst:-false} 00:29:10.307 }, 00:29:10.307 "method": "bdev_nvme_attach_controller" 00:29:10.307 } 00:29:10.307 EOF 00:29:10.307 )") 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.307 { 00:29:10.307 "params": { 00:29:10.307 "name": "Nvme$subsystem", 00:29:10.307 "trtype": "$TEST_TRANSPORT", 00:29:10.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.307 "adrfam": "ipv4", 00:29:10.307 "trsvcid": "$NVMF_PORT", 00:29:10.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.307 "hdgst": ${hdgst:-false}, 00:29:10.307 "ddgst": ${ddgst:-false} 00:29:10.307 }, 00:29:10.307 "method": "bdev_nvme_attach_controller" 00:29:10.307 } 00:29:10.307 EOF 00:29:10.307 )") 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.307 { 00:29:10.307 "params": { 00:29:10.307 "name": "Nvme$subsystem", 00:29:10.307 "trtype": "$TEST_TRANSPORT", 00:29:10.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.307 "adrfam": "ipv4", 00:29:10.307 "trsvcid": "$NVMF_PORT", 00:29:10.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.307 "hdgst": ${hdgst:-false}, 00:29:10.307 "ddgst": ${ddgst:-false} 00:29:10.307 }, 00:29:10.307 "method": "bdev_nvme_attach_controller" 00:29:10.307 } 00:29:10.307 EOF 00:29:10.307 )") 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.307 { 00:29:10.307 "params": { 00:29:10.307 "name": "Nvme$subsystem", 00:29:10.307 "trtype": "$TEST_TRANSPORT", 00:29:10.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.307 "adrfam": "ipv4", 00:29:10.307 "trsvcid": "$NVMF_PORT", 00:29:10.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.307 "hdgst": ${hdgst:-false}, 00:29:10.307 "ddgst": ${ddgst:-false} 00:29:10.307 }, 00:29:10.307 "method": "bdev_nvme_attach_controller" 00:29:10.307 } 00:29:10.307 EOF 00:29:10.307 )") 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.307 { 00:29:10.307 "params": { 00:29:10.307 "name": "Nvme$subsystem", 00:29:10.307 "trtype": "$TEST_TRANSPORT", 00:29:10.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.307 "adrfam": "ipv4", 00:29:10.307 "trsvcid": "$NVMF_PORT", 00:29:10.307 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.307 "hdgst": ${hdgst:-false}, 00:29:10.307 "ddgst": ${ddgst:-false} 00:29:10.307 }, 00:29:10.307 "method": "bdev_nvme_attach_controller" 00:29:10.307 } 00:29:10.307 EOF 00:29:10.307 )") 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.307 { 00:29:10.307 "params": { 00:29:10.307 "name": "Nvme$subsystem", 00:29:10.307 "trtype": "$TEST_TRANSPORT", 00:29:10.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.307 "adrfam": "ipv4", 00:29:10.307 "trsvcid": "$NVMF_PORT", 00:29:10.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.307 "hdgst": ${hdgst:-false}, 00:29:10.307 "ddgst": ${ddgst:-false} 00:29:10.307 }, 00:29:10.307 "method": "bdev_nvme_attach_controller" 00:29:10.307 } 00:29:10.307 EOF 00:29:10.307 )") 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.307 { 00:29:10.307 "params": { 00:29:10.307 "name": "Nvme$subsystem", 00:29:10.307 "trtype": "$TEST_TRANSPORT", 00:29:10.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.307 "adrfam": "ipv4", 00:29:10.307 "trsvcid": "$NVMF_PORT", 00:29:10.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.307 "hdgst": ${hdgst:-false}, 00:29:10.307 "ddgst": ${ddgst:-false} 00:29:10.307 }, 00:29:10.307 "method": "bdev_nvme_attach_controller" 00:29:10.307 } 00:29:10.307 EOF 00:29:10.307 )") 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:29:10.307 14:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:10.307 "params": { 00:29:10.307 "name": "Nvme1", 00:29:10.307 "trtype": "tcp", 00:29:10.307 "traddr": "10.0.0.2", 00:29:10.307 "adrfam": "ipv4", 00:29:10.307 "trsvcid": "4420", 00:29:10.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:10.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:10.307 "hdgst": false, 00:29:10.307 "ddgst": false 00:29:10.307 }, 00:29:10.308 "method": "bdev_nvme_attach_controller" 00:29:10.308 },{ 00:29:10.308 "params": { 00:29:10.308 "name": "Nvme2", 00:29:10.308 "trtype": "tcp", 00:29:10.308 "traddr": "10.0.0.2", 00:29:10.308 "adrfam": "ipv4", 00:29:10.308 "trsvcid": "4420", 00:29:10.308 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:10.308 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:10.308 "hdgst": false, 00:29:10.308 "ddgst": false 00:29:10.308 }, 00:29:10.308 "method": "bdev_nvme_attach_controller" 00:29:10.308 },{ 00:29:10.308 "params": { 00:29:10.308 "name": "Nvme3", 00:29:10.308 "trtype": "tcp", 00:29:10.308 "traddr": "10.0.0.2", 00:29:10.308 "adrfam": "ipv4", 00:29:10.308 "trsvcid": "4420", 00:29:10.308 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:10.308 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:10.308 "hdgst": false, 00:29:10.308 "ddgst": false 00:29:10.308 }, 00:29:10.308 "method": "bdev_nvme_attach_controller" 00:29:10.308 },{ 00:29:10.308 "params": { 00:29:10.308 "name": "Nvme4", 00:29:10.308 "trtype": "tcp", 00:29:10.308 "traddr": "10.0.0.2", 00:29:10.308 "adrfam": "ipv4", 00:29:10.308 "trsvcid": "4420", 00:29:10.308 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:10.308 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:10.308 "hdgst": false, 00:29:10.308 "ddgst": false 00:29:10.308 }, 00:29:10.308 "method": "bdev_nvme_attach_controller" 00:29:10.308 },{ 00:29:10.308 "params": { 00:29:10.308 "name": "Nvme5", 00:29:10.308 "trtype": "tcp", 00:29:10.308 "traddr": "10.0.0.2", 00:29:10.308 "adrfam": "ipv4", 00:29:10.308 "trsvcid": "4420", 00:29:10.308 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:10.308 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:10.308 "hdgst": false, 00:29:10.308 "ddgst": false 00:29:10.308 }, 00:29:10.308 "method": "bdev_nvme_attach_controller" 00:29:10.308 },{ 00:29:10.308 "params": { 00:29:10.308 "name": "Nvme6", 00:29:10.308 "trtype": "tcp", 00:29:10.308 "traddr": "10.0.0.2", 00:29:10.308 "adrfam": "ipv4", 00:29:10.308 "trsvcid": "4420", 00:29:10.308 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:10.308 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:10.308 "hdgst": false, 00:29:10.308 "ddgst": false 00:29:10.308 }, 00:29:10.308 "method": "bdev_nvme_attach_controller" 00:29:10.308 },{ 00:29:10.308 "params": { 00:29:10.308 "name": "Nvme7", 00:29:10.308 "trtype": "tcp", 00:29:10.308 "traddr": "10.0.0.2", 00:29:10.308 "adrfam": "ipv4", 00:29:10.308 "trsvcid": "4420", 00:29:10.308 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:10.308 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:10.308 "hdgst": false, 00:29:10.308 "ddgst": false 00:29:10.308 }, 00:29:10.308 "method": "bdev_nvme_attach_controller" 00:29:10.308 },{ 00:29:10.308 "params": { 00:29:10.308 "name": "Nvme8", 00:29:10.308 "trtype": "tcp", 00:29:10.308 "traddr": "10.0.0.2", 00:29:10.308 "adrfam": "ipv4", 00:29:10.308 "trsvcid": "4420", 00:29:10.308 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:10.308 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:10.308 "hdgst": false, 
00:29:10.308 "ddgst": false 00:29:10.308 }, 00:29:10.308 "method": "bdev_nvme_attach_controller" 00:29:10.308 },{ 00:29:10.308 "params": { 00:29:10.308 "name": "Nvme9", 00:29:10.308 "trtype": "tcp", 00:29:10.308 "traddr": "10.0.0.2", 00:29:10.308 "adrfam": "ipv4", 00:29:10.308 "trsvcid": "4420", 00:29:10.308 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:10.308 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:10.308 "hdgst": false, 00:29:10.308 "ddgst": false 00:29:10.308 }, 00:29:10.308 "method": "bdev_nvme_attach_controller" 00:29:10.308 },{ 00:29:10.308 "params": { 00:29:10.308 "name": "Nvme10", 00:29:10.308 "trtype": "tcp", 00:29:10.308 "traddr": "10.0.0.2", 00:29:10.308 "adrfam": "ipv4", 00:29:10.308 "trsvcid": "4420", 00:29:10.308 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:10.308 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:10.308 "hdgst": false, 00:29:10.308 "ddgst": false 00:29:10.308 }, 00:29:10.308 "method": "bdev_nvme_attach_controller" 00:29:10.308 }' 00:29:10.308 [2024-07-10 14:31:19.705162] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:29:10.308 [2024-07-10 14:31:19.705314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474501 ] 00:29:10.308 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.566 [2024-07-10 14:31:19.835484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.825 [2024-07-10 14:31:20.079667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.354 Running I/O for 10 seconds... 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:13.354 14:31:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:29:13.354 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:13.611 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:13.611 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:13.611 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:13.611 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:13.611 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.611 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.611 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.611 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:29:13.611 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:29:13.611 14:31:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:13.870 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:13.870 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:13.870 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:13.870 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:13.870 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.870 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.870 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.870 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:29:13.870 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:29:13.870 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:14.128 
14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1474501 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1474501 ']' 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1474501 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1474501 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1474501' 00:29:14.128 killing process with pid 1474501 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1474501 00:29:14.128 14:31:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1474501 00:29:14.128 Received shutdown signal, test time was about 1.282449 seconds 00:29:14.128 00:29:14.128 Latency(us) 00:29:14.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.128 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.128 Verification LBA range: start 0x0 length 0x400 00:29:14.128 Nvme1n1 : 1.23 155.93 9.75 0.00 0.00 406443.55 38836.15 341758.10 00:29:14.128 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.128 Verification LBA range: start 0x0 length 0x400 00:29:14.128 Nvme2n1 : 1.27 204.62 12.79 0.00 0.00 304115.79 3131.16 307582.29 00:29:14.128 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.128 Verification LBA range: start 0x0 length 0x400 00:29:14.128 Nvme3n1 : 1.28 199.75 12.48 0.00 0.00 307359.29 24855.13 332437.43 00:29:14.128 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.128 Verification LBA range: start 0x0 length 0x400 00:29:14.128 Nvme4n1 : 1.26 203.65 12.73 0.00 0.00 296253.44 19612.25 326223.64 00:29:14.128 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.128 Verification LBA range: start 0x0 length 0x400 00:29:14.128 Nvme5n1 : 1.20 159.34 9.96 0.00 0.00 371174.97 29515.47 323116.75 00:29:14.128 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.128 Verification 
LBA range: start 0x0 length 0x400 00:29:14.128 Nvme6n1 : 1.26 203.09 12.69 0.00 0.00 287166.39 23787.14 329330.54 00:29:14.128 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.128 Verification LBA range: start 0x0 length 0x400 00:29:14.128 Nvme7n1 : 1.27 200.95 12.56 0.00 0.00 285512.06 23981.32 327777.09 00:29:14.128 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.128 Verification LBA range: start 0x0 length 0x400 00:29:14.128 Nvme8n1 : 1.24 209.45 13.09 0.00 0.00 268235.67 11650.84 320009.86 00:29:14.128 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.128 Verification LBA range: start 0x0 length 0x400 00:29:14.128 Nvme9n1 : 1.25 153.72 9.61 0.00 0.00 359357.76 29709.65 394575.27 00:29:14.128 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.128 Verification LBA range: start 0x0 length 0x400 00:29:14.128 Nvme10n1 : 1.22 157.54 9.85 0.00 0.00 342850.12 22719.15 330883.98 00:29:14.128 =================================================================================================================== 00:29:14.128 Total : 1848.04 115.50 0.00 0.00 317503.72 3131.16 394575.27 00:29:15.533 14:31:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1474202 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:16.516 rmmod nvme_tcp 00:29:16.516 rmmod nvme_fabrics 00:29:16.516 rmmod nvme_keyring 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1474202 ']' 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1474202 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1474202 ']' 00:29:16.516 14:31:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1474202 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1474202 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1474202' 00:29:16.516 killing process with pid 1474202 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1474202 00:29:16.516 14:31:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1474202 00:29:19.792 14:31:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:19.792 14:31:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:19.792 14:31:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:19.792 14:31:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:19.792 14:31:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:19.792 14:31:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.792 14:31:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:19.792 14:31:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:21.694 00:29:21.694 real 0m13.508s 00:29:21.694 user 0m45.748s 00:29:21.694 sys 0m2.235s 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.694 ************************************ 00:29:21.694 END TEST nvmf_shutdown_tc2 00:29:21.694 ************************************ 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:21.694 ************************************ 00:29:21.694 START TEST nvmf_shutdown_tc3 00:29:21.694 ************************************ 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:29:21.694 14:31:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:21.694 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:21.694 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:21.694 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:21.695 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:21.695 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:21.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:29:21.695 00:29:21.695 --- 10.0.0.2 ping statistics --- 00:29:21.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.695 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:21.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:29:21.695 00:29:21.695 --- 10.0.0.1 ping statistics --- 00:29:21.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.695 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:21.695 14:31:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:21.695 14:31:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1475932 00:29:21.695 14:31:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:21.695 14:31:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1475932 00:29:21.695 14:31:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1475932 ']' 00:29:21.695 14:31:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.695 14:31:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:21.695 14:31:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.695 14:31:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:21.695 14:31:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:21.695 [2024-07-10 14:31:31.108015] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
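Everything nvmf_tcp_init just did above reads more easily with the timestamps stripped: the two ice-driver ports are split so that cvl_0_0 (10.0.0.2) lives inside the cvl_0_0_ns_spdk namespace as the target side while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator side, and a ping in each direction proves the path before the target is launched. The commands below are the ones actually traced at nvmf/common.sh@244-268 and @480, collected in order (the repeated netns exec prefixes on the nvmf_tgt line collapsed to one):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E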
00:29:21.695 [2024-07-10 14:31:31.108163] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.954 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.954 [2024-07-10 14:31:31.253442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:22.212 [2024-07-10 14:31:31.481412] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.212 [2024-07-10 14:31:31.481502] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.212 [2024-07-10 14:31:31.481526] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.212 [2024-07-10 14:31:31.481544] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.212 [2024-07-10 14:31:31.481562] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:22.212 [2024-07-10 14:31:31.481698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.212 [2024-07-10 14:31:31.481808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:22.212 [2024-07-10 14:31:31.481847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.212 [2024-07-10 14:31:31.481858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:22.777 [2024-07-10 14:31:32.084955] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:22.777 14:31:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.777 14:31:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:22.777 Malloc1 00:29:22.777 [2024-07-10 14:31:32.220611] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.035 Malloc2 00:29:23.035 Malloc3 00:29:23.035 Malloc4 00:29:23.294 Malloc5 00:29:23.294 Malloc6 00:29:23.551 Malloc7 00:29:23.551 Malloc8 00:29:23.551 Malloc9 00:29:23.809 Malloc10 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1476161 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1476161 
/var/tmp/bdevperf.sock 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1476161 ']' 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:23.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.809 { 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme$subsystem", 00:29:23.809 "trtype": "$TEST_TRANSPORT", 00:29:23.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "$NVMF_PORT", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.809 "hdgst": ${hdgst:-false}, 00:29:23.809 "ddgst": ${ddgst:-false} 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 } 00:29:23.809 EOF 00:29:23.809 )") 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.809 { 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme$subsystem", 00:29:23.809 "trtype": "$TEST_TRANSPORT", 00:29:23.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "$NVMF_PORT", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.809 "hdgst": ${hdgst:-false}, 00:29:23.809 "ddgst": ${ddgst:-false} 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 } 00:29:23.809 EOF 00:29:23.809 )") 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.809 { 00:29:23.809 "params": { 
00:29:23.809 "name": "Nvme$subsystem", 00:29:23.809 "trtype": "$TEST_TRANSPORT", 00:29:23.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "$NVMF_PORT", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.809 "hdgst": ${hdgst:-false}, 00:29:23.809 "ddgst": ${ddgst:-false} 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 } 00:29:23.809 EOF 00:29:23.809 )") 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.809 { 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme$subsystem", 00:29:23.809 "trtype": "$TEST_TRANSPORT", 00:29:23.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "$NVMF_PORT", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.809 "hdgst": ${hdgst:-false}, 00:29:23.809 "ddgst": ${ddgst:-false} 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 } 00:29:23.809 EOF 00:29:23.809 )") 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.809 { 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme$subsystem", 00:29:23.809 "trtype": "$TEST_TRANSPORT", 00:29:23.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "$NVMF_PORT", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.809 "hdgst": ${hdgst:-false}, 00:29:23.809 "ddgst": ${ddgst:-false} 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 } 00:29:23.809 EOF 00:29:23.809 )") 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.809 { 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme$subsystem", 00:29:23.809 "trtype": "$TEST_TRANSPORT", 00:29:23.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "$NVMF_PORT", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.809 "hdgst": ${hdgst:-false}, 00:29:23.809 "ddgst": ${ddgst:-false} 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 } 00:29:23.809 EOF 00:29:23.809 )") 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.809 { 00:29:23.809 "params": { 00:29:23.809 "name": 
"Nvme$subsystem", 00:29:23.809 "trtype": "$TEST_TRANSPORT", 00:29:23.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "$NVMF_PORT", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.809 "hdgst": ${hdgst:-false}, 00:29:23.809 "ddgst": ${ddgst:-false} 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 } 00:29:23.809 EOF 00:29:23.809 )") 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.809 { 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme$subsystem", 00:29:23.809 "trtype": "$TEST_TRANSPORT", 00:29:23.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "$NVMF_PORT", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.809 "hdgst": ${hdgst:-false}, 00:29:23.809 "ddgst": ${ddgst:-false} 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 } 00:29:23.809 EOF 00:29:23.809 )") 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.809 { 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme$subsystem", 00:29:23.809 "trtype": "$TEST_TRANSPORT", 00:29:23.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "$NVMF_PORT", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.809 "hdgst": ${hdgst:-false}, 00:29:23.809 "ddgst": ${ddgst:-false} 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 } 00:29:23.809 EOF 00:29:23.809 )") 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.809 { 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme$subsystem", 00:29:23.809 "trtype": "$TEST_TRANSPORT", 00:29:23.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "$NVMF_PORT", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.809 "hdgst": ${hdgst:-false}, 00:29:23.809 "ddgst": ${ddgst:-false} 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 } 00:29:23.809 EOF 00:29:23.809 )") 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:29:23.809 14:31:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme1", 00:29:23.809 "trtype": "tcp", 00:29:23.809 "traddr": "10.0.0.2", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "4420", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:23.809 "hdgst": false, 00:29:23.809 "ddgst": false 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 },{ 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme2", 00:29:23.809 "trtype": "tcp", 00:29:23.809 "traddr": "10.0.0.2", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "4420", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:23.809 "hdgst": false, 00:29:23.809 "ddgst": false 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 },{ 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme3", 00:29:23.809 "trtype": "tcp", 00:29:23.809 "traddr": "10.0.0.2", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "4420", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:23.809 "hdgst": false, 00:29:23.809 "ddgst": false 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 },{ 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme4", 00:29:23.809 "trtype": "tcp", 00:29:23.809 "traddr": "10.0.0.2", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "4420", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:23.809 "hdgst": false, 00:29:23.809 "ddgst": false 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 },{ 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme5", 00:29:23.809 "trtype": "tcp", 00:29:23.809 "traddr": "10.0.0.2", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "4420", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:23.809 "hdgst": false, 00:29:23.809 "ddgst": false 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 },{ 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme6", 00:29:23.809 "trtype": "tcp", 00:29:23.809 "traddr": "10.0.0.2", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "4420", 00:29:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:23.809 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:23.809 "hdgst": false, 00:29:23.809 "ddgst": false 00:29:23.809 }, 00:29:23.809 "method": "bdev_nvme_attach_controller" 00:29:23.809 },{ 00:29:23.809 "params": { 00:29:23.809 "name": "Nvme7", 00:29:23.809 "trtype": "tcp", 00:29:23.809 "traddr": "10.0.0.2", 00:29:23.809 "adrfam": "ipv4", 00:29:23.809 "trsvcid": "4420", 00:29:23.810 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:23.810 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:23.810 "hdgst": false, 00:29:23.810 "ddgst": false 00:29:23.810 }, 00:29:23.810 "method": "bdev_nvme_attach_controller" 00:29:23.810 },{ 00:29:23.810 "params": { 00:29:23.810 "name": "Nvme8", 00:29:23.810 "trtype": "tcp", 00:29:23.810 "traddr": "10.0.0.2", 00:29:23.810 "adrfam": "ipv4", 00:29:23.810 "trsvcid": "4420", 00:29:23.810 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:23.810 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:23.810 "hdgst": false, 
00:29:23.810 "ddgst": false 00:29:23.810 }, 00:29:23.810 "method": "bdev_nvme_attach_controller" 00:29:23.810 },{ 00:29:23.810 "params": { 00:29:23.810 "name": "Nvme9", 00:29:23.810 "trtype": "tcp", 00:29:23.810 "traddr": "10.0.0.2", 00:29:23.810 "adrfam": "ipv4", 00:29:23.810 "trsvcid": "4420", 00:29:23.810 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:23.810 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:23.810 "hdgst": false, 00:29:23.810 "ddgst": false 00:29:23.810 }, 00:29:23.810 "method": "bdev_nvme_attach_controller" 00:29:23.810 },{ 00:29:23.810 "params": { 00:29:23.810 "name": "Nvme10", 00:29:23.810 "trtype": "tcp", 00:29:23.810 "traddr": "10.0.0.2", 00:29:23.810 "adrfam": "ipv4", 00:29:23.810 "trsvcid": "4420", 00:29:23.810 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:23.810 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:23.810 "hdgst": false, 00:29:23.810 "ddgst": false 00:29:23.810 }, 00:29:23.810 "method": "bdev_nvme_attach_controller" 00:29:23.810 }' 00:29:23.810 [2024-07-10 14:31:33.226539] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:29:23.810 [2024-07-10 14:31:33.226686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476161 ] 00:29:24.066 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.066 [2024-07-10 14:31:33.363595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.324 [2024-07-10 14:31:33.601377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.221 Running I/O for 10 seconds... 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.479 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:26.737 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.737 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:29:26.737 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:29:26.737 14:31:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:26.737 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:26.995 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:26.996 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:26.996 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:26.996 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.996 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:26.996 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.996 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:29:26.996 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:29:26.996 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:27.268 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:27.268 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:27.268 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:27.268 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:27.268 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.268 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1475932 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1475932 ']' 
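By this point bdevperf has been running the verify workload for a few polling intervals: the waitforio helper from the suite's target/shutdown.sh repeatedly queries bdev_get_iostat for Nvme1n1 over the bdevperf RPC socket and extracts num_read_ops with jq (3, then 67, then 131 in the trace), declaring success once at least 100 reads have completed so the nvmf target (pid 1475932) can be killed while I/O is still in flight. A rough stand-alone equivalent of that loop, calling scripts/rpc.py directly instead of the suite's rpc_cmd wrapper, follows; the threshold and names come from the trace, everything else is illustrative.

# Sketch of the polling loop traced above; not the implementation in target/shutdown.sh.
waitforio() {
    local rpc_sock=$1 bdev=$2
    local i=10 read_io_count ret=1
    while ((i != 0)); do
        # Ask bdevperf, via its RPC socket, how many reads it has completed so far.
        read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
        ((i--))
    done
    return "$ret"
}
# Usage mirroring the trace: wait for I/O on Nvme1n1, then shut the target down under load.
#   waitforio /var/tmp/bdevperf.sock Nvme1n1 && kill "$nvmfpid"   # nvmfpid is illustrative; the trace kills pid 1475932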
00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1475932 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1475932 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1475932' 00:29:27.269 killing process with pid 1475932 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1475932 00:29:27.269 14:31:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1475932 00:29:27.269 [2024-07-10 14:31:36.587204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587586] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587621] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587752] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587871] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587962] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.587999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588103] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588242] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588340] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.588504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.594415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.594493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.594516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.594534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.594551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.594568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.594586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.594603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.594620] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.594638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.594655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.269 [2024-07-10 14:31:36.594672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594736] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594801] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594884] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594940] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.594992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595059] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595110] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595514] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.595619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601722] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601773] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601863] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601881] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.601999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.602016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.602033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.602049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.602066] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.602083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.602099] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.602116] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.602133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.602149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.602166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.602183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.602200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.602217] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.270 [2024-07-10 14:31:36.602233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602355] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602620] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602696] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.602730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.605509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.605553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.605844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.605878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.605913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.605933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.605950] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.605967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.605986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.606003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.606021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.606041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.606059] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.606077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:27.271 [2024-07-10 14:31:36.606094] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set
[... the same recv-state error repeats for tqpair=0x61800000ac80 through 2024-07-10 14:31:36.607018 ...]
[2024-07-10 14:31:36.606153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-10 14:31:36.606210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the other outstanding ASYNC EVENT REQUEST commands (qid:0 cid:1-3) on each admin queue are completed the same way: ABORTED - SQ DELETION (00/08) ...]
[2024-07-10 14:31:36.606383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set
[2024-07-10 14:31:36.606714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set
[2024-07-10 14:31:36.607008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set
[2024-07-10 14:31:36.607074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-10 14:31:36.607102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-10 14:31:36.607125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-10 14:31:36.607145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-10 14:31:36.607166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-10 14:31:36.607185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-10 14:31:36.607206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-10 14:31:36.607246] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.607267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:29:27.272 [2024-07-10 14:31:36.607332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.272 [2024-07-10 14:31:36.607360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.607383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.272 [2024-07-10 14:31:36.607403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.607432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.272 [2024-07-10 14:31:36.607455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.607486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.272 [2024-07-10 14:31:36.607506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.607530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:29:27.272 [2024-07-10 14:31:36.607722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.607757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.607807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.607830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.607856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.607878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.607901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.607922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.607961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.607982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.272 [2024-07-10 14:31:36.608658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.272 [2024-07-10 14:31:36.608689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273 [2024-07-10 14:31:36.608727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.273 [2024-07-10 14:31:36.608749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273 [2024-07-10 14:31:36.608771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.273 [2024-07-10 14:31:36.608792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273 [2024-07-10 14:31:36.608795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.608814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.273 [2024-07-10 14:31:36.608830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.608835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273 [2024-07-10 14:31:36.608850] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.608858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:27.273 [2024-07-10 14:31:36.608873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.608878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273 [2024-07-10 14:31:36.608891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.608901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.273 [2024-07-10 14:31:36.608908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.608922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273 [2024-07-10 14:31:36.608926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.608942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.608944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.273 [2024-07-10 14:31:36.608959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.608964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273 [2024-07-10 14:31:36.608977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.608987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.273 [2024-07-10 14:31:36.608994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.609008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273 [2024-07-10 14:31:36.609012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.609029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.609032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.273 [2024-07-10 14:31:36.609046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.609052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273 [2024-07-10 14:31:36.609063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.609075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.273 [2024-07-10 14:31:36.609081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273
[2024-07-10 14:31:36.609095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273 [2024-07-10 14:31:36.609098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273
[2024-07-10 14:31:36.609118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.273 [2024-07-10 14:31:36.609119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273
[2024-07-10 14:31:36.609140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.609147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273 [2024-07-10 14:31:36.609158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273
[2024-07-10 14:31:36.609172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.273 [2024-07-10 14:31:36.609175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273
[2024-07-10 14:31:36.609194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.609195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273
[2024-07-10 14:31:36.609213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.609220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.273 [2024-07-10 14:31:36.609231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273
[2024-07-10 14:31:36.609241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273 [2024-07-10 14:31:36.609249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273
[2024-07-10 14:31:36.609264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.273 [2024-07-10 14:31:36.609267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273
[2024-07-10 14:31:36.609284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273 [2024-07-10 14:31:36.609286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273
[2024-07-10 14:31:36.609305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.609309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.273 [2024-07-10 14:31:36.609322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273
[2024-07-10 14:31:36.609330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.273 [2024-07-10 14:31:36.609339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.609356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273
[2024-07-10 14:31:36.609353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.273
[2024-07-10 14:31:36.609380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.273 [2024-07-10 14:31:36.609384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274
[2024-07-10 14:31:36.609406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274 [2024-07-10 14:31:36.609412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.609431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.609468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.609487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274 [2024-07-10 14:31:36.609507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274
[2024-07-10 14:31:36.609526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274 [2024-07-10 14:31:36.609533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.609543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.609561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274 [2024-07-10 14:31:36.609577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274
[2024-07-10 14:31:36.609599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274 [2024-07-10 14:31:36.609601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.609616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.609634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.609651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274 [2024-07-10 14:31:36.609669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.609689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.609708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.609726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.609760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274 [2024-07-10 14:31:36.609778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.609794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.609811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609829] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274 [2024-07-10 14:31:36.609834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.609846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.609862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.609879] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274 [2024-07-10 14:31:36.609901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274
[2024-07-10 14:31:36.609919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274 [2024-07-10 14:31:36.609925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.609936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.609952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.609977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274 [2024-07-10 14:31:36.609983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274
[2024-07-10 14:31:36.610005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.609994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:29:27.274
[2024-07-10 14:31:36.610028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.610049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.610072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.610092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.610115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.610161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.610186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.610207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.610230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.610251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.610275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.610296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.610319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.610340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.610362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.610383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0
m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.610406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.610434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.610484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.610520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.610550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.610572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.610595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.610628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.610653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.610676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.610709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.274 [2024-07-10 14:31:36.610730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.274 [2024-07-10 14:31:36.610752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.275 [2024-07-10 14:31:36.610778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.275 [2024-07-10 14:31:36.610803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.275 [2024-07-10 14:31:36.610840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.275 [2024-07-10 14:31:36.610864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.275 [2024-07-10 14:31:36.610884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.275 [2024-07-10 14:31:36.611180] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f8900 was disconnected and freed. reset controller. 
00:29:27.275 [2024-07-10 14:31:36.613033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613176] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 
00:29:27.275 [2024-07-10 14:31:36.613561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275
[2024-07-10 14:31:36.613708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:27.275 [2024-07-10 14:31:36.613726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275
[2024-07-10 14:31:36.613772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:29:27.275
[2024-07-10 14:31:36.613795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613813] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613836] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10
14:31:36.613924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.613993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 
14:31:36.614416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.614447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.615165] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:27.275 [2024-07-10 14:31:36.615693] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:27.275 [2024-07-10 14:31:36.615909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.275 [2024-07-10 14:31:36.615957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420 00:29:27.275 [2024-07-10 14:31:36.615982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:29:27.275 [2024-07-10 14:31:36.616558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:29:27.275 [2024-07-10 14:31:36.616622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:29:27.275 [2024-07-10 14:31:36.616707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:29:27.275 [2024-07-10 14:31:36.616797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.275 [2024-07-10 14:31:36.616826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.275 [2024-07-10 14:31:36.616851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.275 [2024-07-10 14:31:36.616872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.275 [2024-07-10 14:31:36.616893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.275 [2024-07-10 14:31:36.616912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.275 [2024-07-10 14:31:36.616933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.275 [2024-07-10 14:31:36.616953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.275 [2024-07-10 14:31:36.616972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.617037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.276 [2024-07-10 14:31:36.617065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.617086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 
nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.276 [2024-07-10 14:31:36.617110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.617133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.276 [2024-07-10 14:31:36.617153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.617174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.276 [2024-07-10 14:31:36.617201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.617225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.617269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:29:27.276 [2024-07-10 14:31:36.617314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:27.276 [2024-07-10 14:31:36.617416] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:27.276
[2024-07-10 14:31:36.617558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.617596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.617641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.617668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.617706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.617728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.617752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.617775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276
[2024-07-10 14:31:36.617799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.617789] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.617824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.617828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.617850] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.617849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276
[2024-07-10 14:31:36.617871] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.617874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.617887] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.617899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.617905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.617921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.617923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.617942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.617945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276
[2024-07-10 14:31:36.617983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.617986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.618001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.618011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.618018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.618033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.618035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.618053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276
[2024-07-10 14:31:36.618069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.618086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.618103] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276
[2024-07-10 14:31:36.618123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.618140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.618149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.618157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.618174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.618195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.618208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.618228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.618245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.618263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276
[2024-07-10 14:31:36.618281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.618299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.618316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.618317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.618337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.618354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.618372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.618386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.618389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276
[2024-07-10 14:31:36.618408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276
[2024-07-10 14:31:36.618452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.618483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.618502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.276 [2024-07-10 14:31:36.618521]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.276 [2024-07-10 14:31:36.618539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.276 [2024-07-10 14:31:36.618544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.277 [2024-07-10 14:31:36.618580] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.277 [2024-07-10 14:31:36.618598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.277 [2024-07-10 14:31:36.618616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-10 14:31:36.618634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.277 with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.277 [2024-07-10 14:31:36.618671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.277 [2024-07-10 14:31:36.618698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.277 [2024-07-10 14:31:36.618734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:27.277 [2024-07-10 14:31:36.618769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.277 [2024-07-10 14:31:36.618786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-10 14:31:36.618804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.277 with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618826] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.277 [2024-07-10 14:31:36.618855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.277 [2024-07-10 14:31:36.618872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.277 [2024-07-10 14:31:36.618889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.277 [2024-07-10 14:31:36.618907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.277 [2024-07-10 14:31:36.618924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.278 [2024-07-10 14:31:36.618927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.618941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.278 [2024-07-10 14:31:36.618949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.618959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.278 [2024-07-10 14:31:36.618972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.618977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.278 [2024-07-10 14:31:36.618994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same [2024-07-10 14:31:36.618994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:29:27.278 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.278 [2024-07-10 14:31:36.619020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:27.278 [2024-07-10 14:31:36.619066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.619975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.619997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.620021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.620042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.620065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.620086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.620109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.620130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.620153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.620174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.620197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.620218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.620241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.620262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.620285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.620306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.620329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.620350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:27.278 [2024-07-10 14:31:36.620378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.620400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.620432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.620482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.620508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.620531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.620555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.620577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.620602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.278 [2024-07-10 14:31:36.620623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.278 [2024-07-10 14:31:36.620648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.279 [2024-07-10 14:31:36.620671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.279 [2024-07-10 14:31:36.620706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.279 [2024-07-10 14:31:36.620728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.279 [2024-07-10 14:31:36.620753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.279 [2024-07-10 14:31:36.620783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.279 [2024-07-10 14:31:36.620804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8e00 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621092] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f8e00 was disconnected and freed. reset controller. 
00:29:27.279 [2024-07-10 14:31:36.621203] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:27.279
[2024-07-10 14:31:36.621318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:27.279
[2024-07-10 14:31:36.621609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:27.279
[2024-07-10 14:31:36.621644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279
[2024-07-10 14:31:36.621655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:29:27.279 [2024-07-10 14:31:36.621661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621775] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621826] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.621986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 
00:29:27.279 [2024-07-10 14:31:36.622036] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622103] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 
00:29:27.279 [2024-07-10 14:31:36.622403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.622491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.623312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.279 [2024-07-10 14:31:36.623348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:27.279 [2024-07-10 14:31:36.623514] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:27.279 [2024-07-10 14:31:36.623902] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.623938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.623941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-10 14:31:36.623959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.623977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.623979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3b80 with addr=10.0.0.2, port=4420 00:29:27.279 [2024-07-10 14:31:36.623995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.624003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.624012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.624030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.624047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.279 [2024-07-10 14:31:36.624064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 
00:29:27.280 [2024-07-10 14:31:36.624132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624278] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 
00:29:27.280 [2024-07-10 14:31:36.624525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:29:27.280 [2024-07-10 14:31:36.624843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624944] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.624993] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:27.280 [2024-07-10 14:31:36.625009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625174] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.625437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.626040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:27.280 [2024-07-10 14:31:36.626070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:27.280 [2024-07-10 14:31:36.626092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:27.280 [2024-07-10 14:31:36.626256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.280 [2024-07-10 14:31:36.626288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.280 [2024-07-10 14:31:36.626322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.280 [2024-07-10 14:31:36.626346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.280 [2024-07-10 14:31:36.626370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9800 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.626668] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9800 was disconnected and freed. reset controller. 
00:29:27.280 [2024-07-10 14:31:36.626826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:27.280 [2024-07-10 14:31:36.626862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.280 [2024-07-10 14:31:36.626952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.280 [2024-07-10 14:31:36.626982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.280 [2024-07-10 14:31:36.627014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.280 [2024-07-10 14:31:36.627036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.280 [2024-07-10 14:31:36.627058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.280 [2024-07-10 14:31:36.627078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.280 [2024-07-10 14:31:36.627100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.280 [2024-07-10 14:31:36.627120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.280 [2024-07-10 14:31:36.627140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.627205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.280 [2024-07-10 14:31:36.627233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.280 [2024-07-10 14:31:36.627256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.280 [2024-07-10 14:31:36.627277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.280 [2024-07-10 14:31:36.627298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.280 [2024-07-10 14:31:36.627318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.280 [2024-07-10 14:31:36.627339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.280 [2024-07-10 14:31:36.627359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.280 [2024-07-10 14:31:36.627378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(5) to be set 00:29:27.280 [2024-07-10 14:31:36.627483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:27.280 [2024-07-10 14:31:36.627513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.627536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.281 [2024-07-10 14:31:36.627557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.627578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.281 [2024-07-10 14:31:36.627599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.627620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.281 [2024-07-10 14:31:36.627641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.627659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set 00:29:27.281 [2024-07-10 14:31:36.627701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:27.281 [2024-07-10 14:31:36.627746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor 00:29:27.281 [2024-07-10 14:31:36.628881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:29:27.281 [2024-07-10 14:31:36.628933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor 00:29:27.281 [2024-07-10 14:31:36.629101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.281 [2024-07-10 14:31:36.629138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420 00:29:27.281 [2024-07-10 14:31:36.629160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:29:27.281 [2024-07-10 14:31:36.629224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.629292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.629341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.629387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.629442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.629511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.629557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.629602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.629648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.629703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.629765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.629809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.629852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:27.281 [2024-07-10 14:31:36.629912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.629958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.629978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 
14:31:36.630359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630861] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.281 [2024-07-10 14:31:36.630882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.281 [2024-07-10 14:31:36.630906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.630927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.630951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.630972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.631965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.631988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.632010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.632033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.632053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.632075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.632095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.632119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.632140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.632163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.632183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.632207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.632227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.632252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.632272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.632295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.632316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.632336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8680 is same with the state(5) to be set 00:29:27.282 [2024-07-10 14:31:36.633952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.633986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.634018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.634041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.634066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.634088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.634111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.282 [2024-07-10 14:31:36.634138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.282 [2024-07-10 14:31:36.634168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.634975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.634997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.635960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.635988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.636011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.636036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.636060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.636084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.636106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.636130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.636154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.636178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.283 [2024-07-10 14:31:36.636200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.283 [2024-07-10 14:31:36.636225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.636246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.636271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.636293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.636317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.636339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.636368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.636390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.636414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.636520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.636549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.636573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.636598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.636621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.636646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.636673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.636698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.636721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.636745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.636768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.636792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.636814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.636838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.636860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.636884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.636907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.636931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.636953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.636978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.637001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.637025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.637047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.637072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.637093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.637128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.637149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.637171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8b80 is same with the state(5) to be set 00:29:27.284 [2024-07-10 14:31:36.639057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639280] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.639955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.639979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.640004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.640027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.640052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.640075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.640100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.640122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.640147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.640169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.640194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.640216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.640240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-10 14:31:36.640262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.284 [2024-07-10 14:31:36.640286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.640331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.640378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.640423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.640492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.640538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.640583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.640629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.640686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.640731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.640777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.640823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.640869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.640915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.640960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.640982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:27.285 [2024-07-10 14:31:36.641308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 
14:31:36.641807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.641972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.641994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.642019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.642043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.642067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.642089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.642114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.642137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.642162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.642184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.642209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.285 [2024-07-10 14:31:36.642231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.285 [2024-07-10 14:31:36.642258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9d00 is same with the state(5) to be set 00:29:27.285 [2024-07-10 14:31:36.647265] 
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.285 [2024-07-10 14:31:36.647328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:27.285 [2024-07-10 14:31:36.647354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:27.286 [2024-07-10 14:31:36.647485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:29:27.286 [2024-07-10 14:31:36.647589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor 00:29:27.286 [2024-07-10 14:31:36.647650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:27.286 [2024-07-10 14:31:36.647735] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:27.286 [2024-07-10 14:31:36.648853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.286 [2024-07-10 14:31:36.648896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5980 with addr=10.0.0.2, port=4420 00:29:27.286 [2024-07-10 14:31:36.648923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set 00:29:27.286 [2024-07-10 14:31:36.649141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.286 [2024-07-10 14:31:36.649174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:27.286 [2024-07-10 14:31:36.649198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:29:27.286 [2024-07-10 14:31:36.649346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.286 [2024-07-10 14:31:36.649380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=4420 00:29:27.286 [2024-07-10 14:31:36.649403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set 00:29:27.286 [2024-07-10 14:31:36.649571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.286 [2024-07-10 14:31:36.649605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420 00:29:27.286 [2024-07-10 14:31:36.649628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:29:27.286 [2024-07-10 14:31:36.649651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:27.286 [2024-07-10 14:31:36.649671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:27.286 [2024-07-10 14:31:36.649702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
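The connect() failures interleaved above are all reported with errno = 111; on Linux that is ECONNREFUSED, meaning nothing is accepting connections on 10.0.0.2:4420 at that moment, so each reconnect attempt is refused until the target side is reachable again. A quick check of the mapping (illustrative only; Linux errno numbering is assumed):

    import errno, os

    # errno 111, as printed by posix_sock_create above, is ECONNREFUSED on Linux.
    assert errno.ECONNREFUSED == 111
    print(os.strerror(111))  # -> "Connection refused"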
00:29:27.286 [2024-07-10 14:31:36.650803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.650847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.650884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.650908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.650933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.650960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.650985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 
14:31:36.651300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651825] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.651961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.651983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.652006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.652027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.652051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.286 [2024-07-10 14:31:36.652071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.286 [2024-07-10 14:31:36.652095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.652975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.652995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.653901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.653922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9080 is same with the state(5) to be set 00:29:27.287 [2024-07-10 14:31:36.655512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.655543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.655609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.655638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.655665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.655688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.655712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.287 [2024-07-10 14:31:36.655750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.287 [2024-07-10 14:31:36.655776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.655798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.655821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.655842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.655866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.655887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.655911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.655932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.655956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.655976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.656962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.656984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.288 [2024-07-10 14:31:36.657759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.288 [2024-07-10 14:31:36.657781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.657821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.657844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.657868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.657888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.657911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.657932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.657954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.657975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.658002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.658024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.658046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.658067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.658090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.658112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.658135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.658156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.658179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.658200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.658223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.658244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.658268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.658289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.658312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.658333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.658356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.658376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.658415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.658446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.658472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.658494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.658519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.658540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.658563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.658589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.658624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9300 is same with the state(5) to be set 00:29:27.289 [2024-07-10 14:31:36.660730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:27.289 [2024-07-10 14:31:36.660773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.289 [2024-07-10 14:31:36.660796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:29:27.289 [2024-07-10 14:31:36.660830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:29:27.289 [2024-07-10 14:31:36.660899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor 00:29:27.289 [2024-07-10 14:31:36.660933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:27.289 [2024-07-10 14:31:36.660961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:29:27.289 [2024-07-10 14:31:36.660988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:29:27.289 [2024-07-10 14:31:36.661062] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:27.289 [2024-07-10 14:31:36.661093] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:27.289 [2024-07-10 14:31:36.661120] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:27.289 [2024-07-10 14:31:36.661145] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:27.289 [2024-07-10 14:31:36.661326] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:27.289 [2024-07-10 14:31:36.661605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.289 [2024-07-10 14:31:36.661644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3b80 with addr=10.0.0.2, port=4420 00:29:27.289 [2024-07-10 14:31:36.661667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set 00:29:27.289 [2024-07-10 14:31:36.661823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.289 [2024-07-10 14:31:36.661859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:27.289 [2024-07-10 14:31:36.661882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:29:27.289 [2024-07-10 14:31:36.662067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.289 [2024-07-10 14:31:36.662101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4a80 with addr=10.0.0.2, port=4420 00:29:27.289 [2024-07-10 14:31:36.662134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set 00:29:27.289 [2024-07-10 14:31:36.662155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:27.289 [2024-07-10 14:31:36.662175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:27.289 [2024-07-10 14:31:36.662194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
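The dump above pairs each READ/WRITE command print with an ABORTED - SQ DELETION completion, one pair per outstanding command on the deleted queue, so the useful signal in a section like this is the per-queue counts rather than the individual entries. A rough condensing sketch (it assumes only the line formats visible above; log.txt is a placeholder for the captured console output):

    import re
    from collections import Counter

    # Summarize the patterns that dominate this section of the log:
    # aborted completions, the READ/WRITE commands they belong to, and connect() errnos.
    abort_re = re.compile(r"ABORTED - SQ DELETION \(00/08\)")
    cmd_re   = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+)")
    conn_re  = re.compile(r"connect\(\) failed, errno = (\d+)")

    counts = Counter()
    with open("log.txt") as f:                      # placeholder path
        for line in f:
            counts["aborted completions"] += len(abort_re.findall(line))
            for opc, sqid, _cid in cmd_re.findall(line):
                counts[f"{opc} commands printed on sqid {sqid}"] += 1
            for eno in conn_re.findall(line):
                counts[f"connect() errno {eno}"] += 1

    for key, n in counts.most_common():
        print(f"{n:6d}  {key}")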
00:29:27.289 [2024-07-10 14:31:36.662222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.289 [2024-07-10 14:31:36.662244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.289 [2024-07-10 14:31:36.662267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.289 [2024-07-10 14:31:36.662296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:27.289 [2024-07-10 14:31:36.662316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:27.289 [2024-07-10 14:31:36.662350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:27.289 [2024-07-10 14:31:36.662383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:27.289 [2024-07-10 14:31:36.662405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:27.289 [2024-07-10 14:31:36.662423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:27.289 [2024-07-10 14:31:36.663465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.663498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.663532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.663556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.663581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.663604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.663627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.663649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.663673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.663695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.663720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.663757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.663781] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.663803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.663826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.663848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.663872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.663893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.663915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.289 [2024-07-10 14:31:36.663941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.289 [2024-07-10 14:31:36.663966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.663987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.664961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.664984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.290 [2024-07-10 14:31:36.665682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.290 [2024-07-10 14:31:36.665705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:27.291 [2024-07-10 14:31:36.665740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.665781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.665807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.665832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.665853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.665876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.665898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.665920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.665941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.665963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.665984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.666007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.666027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.666050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.666070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.666093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.666114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.666136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.666157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.666179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 
14:31:36.666199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.666222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.666243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.666265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.666286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.666309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.666330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.666356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.666378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.666400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.666421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.666481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.666505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.666529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.666550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.666571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9580 is same with the state(5) to be set 00:29:27.291 [2024-07-10 14:31:36.668139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668761] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.668960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.668983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.669003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.669026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.669047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.669070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.669091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.669114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.669139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.669164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.669186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.669209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.669230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.291 [2024-07-10 14:31:36.669254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.291 [2024-07-10 14:31:36.669274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.669297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.669318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.669341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.669362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.669385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.669405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.669454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.669487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.669513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.669535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.669559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.669580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.669604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.669626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.669650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.669683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.669706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.669727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.669769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.669792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.669817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.669837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.669860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.669881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.669904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.669925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.669947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.669967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:27.292 [2024-07-10 14:31:36.670727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.670962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.670989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.671011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.671033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.671054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.671077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.671098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.671120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.671141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.671163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 
14:31:36.671184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.671207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.292 [2024-07-10 14:31:36.671226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.292 [2024-07-10 14:31:36.671247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9a80 is same with the state(5) to be set 00:29:27.292 [2024-07-10 14:31:36.675866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:27.292 [2024-07-10 14:31:36.675906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.292 [2024-07-10 14:31:36.675927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.292 [2024-07-10 14:31:36.675943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.292 [2024-07-10 14:31:36.675958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.292 [2024-07-10 14:31:36.675981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:29:27.293 task offset: 24576 on job bdev=Nvme2n1 fails
00:29:27.293
00:29:27.293 Latency(us)
00:29:27.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:27.293 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:27.293 Job: Nvme1n1 ended in about 1.09 seconds with error
00:29:27.293 Verification LBA range: start 0x0 length 0x400
00:29:27.293 Nvme1n1 : 1.09 117.01 7.31 58.51 0.00 361052.79 46797.56 282727.16
00:29:27.293 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:27.293 Job: Nvme2n1 ended in about 1.07 seconds with error
00:29:27.293 Verification LBA range: start 0x0 length 0x400
00:29:27.293 Nvme2n1 : 1.07 178.86 11.18 59.62 0.00 260570.07 6796.33 302921.96
00:29:27.293 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:27.293 Job: Nvme3n1 ended in about 1.10 seconds with error
00:29:27.293 Verification LBA range: start 0x0 length 0x400
00:29:27.293 Nvme3n1 : 1.10 174.75 10.92 58.25 0.00 261882.12 20291.89 301368.51
00:29:27.293 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:27.293 Job: Nvme4n1 ended in about 1.08 seconds with error
00:29:27.293 Verification LBA range: start 0x0 length 0x400
00:29:27.293 Nvme4n1 : 1.08 177.24 11.08 59.08 0.00 253043.48 16408.27 306028.85
00:29:27.293 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:27.293 Job: Nvme5n1 ended in about 1.12 seconds with error
00:29:27.293 Verification LBA range: start 0x0 length 0x400
00:29:27.293 Nvme5n1 : 1.12 114.76 7.17 57.38 0.00 341612.97 29515.47 326223.64
00:29:27.293 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:27.293 Job: Nvme6n1 ended in about 1.12 seconds with error
00:29:27.293 Verification LBA range: start 0x0 length 0x400
00:29:27.293 Nvme6n1 : 1.12 114.28 7.14 57.14 0.00 336545.56 25631.86 312242.63
00:29:27.293 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:27.293 Job: Nvme7n1 ended in about 1.13 seconds with error
00:29:27.293 Verification LBA range: start 0x0 length 0x400
00:29:27.293 Nvme7n1 : 1.13 170.21 10.64 56.74 0.00 249394.44 22427.88 302921.96
00:29:27.293 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:27.293 Job: Nvme8n1 ended in about 1.09 seconds with error
00:29:27.293 Verification LBA range: start 0x0 length 0x400
00:29:27.293 Nvme8n1 : 1.09 174.46 10.90 1.84 0.00 312365.01 24078.41 312242.63
00:29:27.293 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:27.293 Job: Nvme9n1 ended in about 1.13 seconds with error
00:29:27.293 Verification LBA range: start 0x0 length 0x400
00:29:27.293 Nvme9n1 : 1.13 113.01 7.06 56.50 0.00 321039.42 28156.21 340204.66
00:29:27.293 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:27.293 Job: Nvme10n1 ended in about 1.10 seconds with error
00:29:27.293 Verification LBA range: start 0x0 length 0x400
00:29:27.293 Nvme10n1 : 1.10 119.59 7.47 57.98 0.00 298692.83 24563.86 318456.41
00:29:27.293 ===================================================================================================================
00:29:27.293 Total : 1454.17 90.89 523.03 0.00 294521.97 6796.33 340204.66
00:29:27.552 [2024-07-10 14:31:36.760322] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:27.552 [2024-07-10 14:31:36.760554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:29:27.552 [2024-07-10 14:31:36.760603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:27.552 [2024-07-10 14:31:36.760634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor 00:29:27.552 [2024-07-10 14:31:36.760709] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:27.552 [2024-07-10 14:31:36.760805] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:27.552 [2024-07-10 14:31:36.760836] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:27.552 [2024-07-10 14:31:36.760864] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
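In the bdevperf summary above, the per-bdev rows list runtime(s), IOPS, MiB/s, Fail/s, TO/s and then the Average/min/max latency figures in microseconds (per the Latency(us) header). With the 65536-byte (64 KiB) IO size shown in each Job line, the MiB/s column is simply IOPS multiplied by the IO size; a small illustrative check, with the values copied from the Nvme2n1 and Total rows:

    # recompute the MiB/s column from IOPS and the 64 KiB IO size used by the jobs
    awk 'BEGIN {
        io_size = 65536                                                        # bytes per IO, from the Job lines
        printf "Nvme2n1: %.2f MiB/s\n", 178.86  * io_size / (1024 * 1024)      # table shows 11.18
        printf "Total:   %.2f MiB/s\n", 1454.17 * io_size / (1024 * 1024)      # table shows 90.89
    }'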
00:29:27.552 [2024-07-10 14:31:36.761094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:29:27.552 [2024-07-10 14:31:36.761528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.552 [2024-07-10 14:31:36.761575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420 00:29:27.552 [2024-07-10 14:31:36.761603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:29:27.552 [2024-07-10 14:31:36.761832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.552 [2024-07-10 14:31:36.761869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5200 with addr=10.0.0.2, port=4420 00:29:27.552 [2024-07-10 14:31:36.761900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set 00:29:27.552 [2024-07-10 14:31:36.761924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:27.552 [2024-07-10 14:31:36.761944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:27.552 [2024-07-10 14:31:36.761967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:27.552 [2024-07-10 14:31:36.761999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:27.552 [2024-07-10 14:31:36.762020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:27.552 [2024-07-10 14:31:36.762039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:29:27.552 [2024-07-10 14:31:36.762066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:27.552 [2024-07-10 14:31:36.762102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:27.552 [2024-07-10 14:31:36.762122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:27.552 [2024-07-10 14:31:36.762170] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:27.552 [2024-07-10 14:31:36.762201] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:27.552 [2024-07-10 14:31:36.762226] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:27.552 [2024-07-10 14:31:36.762254] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:27.552 [2024-07-10 14:31:36.762280] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:27.552 [2024-07-10 14:31:36.762305] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:27.552 [2024-07-10 14:31:36.762330] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:27.552 [2024-07-10 14:31:36.763458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:27.552 [2024-07-10 14:31:36.763498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:27.552 [2024-07-10 14:31:36.763524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.553 [2024-07-10 14:31:36.763557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:29:27.553 [2024-07-10 14:31:36.763628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.553 [2024-07-10 14:31:36.763655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.553 [2024-07-10 14:31:36.763672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.553 [2024-07-10 14:31:36.763918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.553 [2024-07-10 14:31:36.763954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420 00:29:27.553 [2024-07-10 14:31:36.763978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(5) to be set 00:29:27.553 [2024-07-10 14:31:36.764006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:29:27.553 [2024-07-10 14:31:36.764035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor 00:29:27.553 [2024-07-10 14:31:36.764358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.553 [2024-07-10 14:31:36.764412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420 00:29:27.553 [2024-07-10 14:31:36.764444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:29:27.553 [2024-07-10 14:31:36.764587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.553 [2024-07-10 14:31:36.764621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=4420 00:29:27.553 [2024-07-10 14:31:36.764643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set 00:29:27.553 [2024-07-10 14:31:36.764822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.553 [2024-07-10 14:31:36.764855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:27.553 [2024-07-10 14:31:36.764877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:29:27.553 [2024-07-10 14:31:36.765028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.553 [2024-07-10 14:31:36.765061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5980 with addr=10.0.0.2, port=4420 00:29:27.553 [2024-07-10 14:31:36.765083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6150001f5980 is same with the state(5) to be set 00:29:27.553 [2024-07-10 14:31:36.765109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:27.553 [2024-07-10 14:31:36.765134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:27.553 [2024-07-10 14:31:36.765154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:27.553 [2024-07-10 14:31:36.765173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:27.553 [2024-07-10 14:31:36.765203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:27.553 [2024-07-10 14:31:36.765225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:27.553 [2024-07-10 14:31:36.765243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:27.553 [2024-07-10 14:31:36.765344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.553 [2024-07-10 14:31:36.765371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.553 [2024-07-10 14:31:36.765395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:29:27.553 [2024-07-10 14:31:36.765449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:29:27.553 [2024-07-10 14:31:36.765479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:27.553 [2024-07-10 14:31:36.765508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor 00:29:27.553 [2024-07-10 14:31:36.765532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:27.553 [2024-07-10 14:31:36.765550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:27.553 [2024-07-10 14:31:36.765569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:29:27.553 [2024-07-10 14:31:36.765657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.553 [2024-07-10 14:31:36.765685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:27.553 [2024-07-10 14:31:36.765704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:27.553 [2024-07-10 14:31:36.765744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:27.553 [2024-07-10 14:31:36.765772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:27.553 [2024-07-10 14:31:36.765793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:27.553 [2024-07-10 14:31:36.765812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:29:27.553 [2024-07-10 14:31:36.765836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.553 [2024-07-10 14:31:36.765855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.553 [2024-07-10 14:31:36.765872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.553 [2024-07-10 14:31:36.765896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:27.553 [2024-07-10 14:31:36.765916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:27.553 [2024-07-10 14:31:36.765933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:29:27.553 [2024-07-10 14:31:36.765990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.553 [2024-07-10 14:31:36.766027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.553 [2024-07-10 14:31:36.766045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.553 [2024-07-10 14:31:36.766061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.836 14:31:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:29:30.836 14:31:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:29:31.094 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1476161 00:29:31.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1476161) - No such process 00:29:31.094 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:29:31.094 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:29:31.094 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:31.094 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:31.353 rmmod nvme_tcp 00:29:31.353 rmmod nvme_fabrics 00:29:31.353 rmmod nvme_keyring 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:31.353 14:31:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:31.353 14:31:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.290 14:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:33.290 00:29:33.290 real 0m11.831s 00:29:33.290 user 0m34.487s 00:29:33.290 sys 0m2.162s 00:29:33.290 14:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:33.290 14:31:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:33.290 ************************************ 00:29:33.290 END TEST nvmf_shutdown_tc3 00:29:33.290 ************************************ 00:29:33.290 14:31:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:29:33.290 14:31:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:29:33.290 00:29:33.290 real 0m43.036s 00:29:33.290 user 2m16.713s 00:29:33.290 sys 0m8.381s 00:29:33.290 14:31:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:33.290 14:31:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:33.290 ************************************ 00:29:33.290 END TEST nvmf_shutdown 00:29:33.290 ************************************ 00:29:33.290 14:31:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:33.290 14:31:42 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:29:33.290 14:31:42 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:33.290 14:31:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:33.290 14:31:42 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:29:33.290 14:31:42 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:33.290 14:31:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:33.290 14:31:42 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:29:33.290 14:31:42 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:33.290 14:31:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:33.290 14:31:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:33.290 14:31:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:33.290 ************************************ 00:29:33.290 START TEST nvmf_multicontroller 
00:29:33.290 ************************************ 00:29:33.290 14:31:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:33.548 * Looking for test storage... 00:29:33.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:33.548 
14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:29:33.548 14:31:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.446 14:31:44 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:35.446 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:35.446 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:35.446 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:35.446 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.446 14:31:44 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:35.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:29:35.446 00:29:35.446 --- 10.0.0.2 ping statistics --- 00:29:35.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.446 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:35.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:29:35.446 00:29:35.446 --- 10.0.0.1 ping statistics --- 00:29:35.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.446 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1479054 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1479054 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1479054 ']' 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:35.446 14:31:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.447 14:31:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:35.447 14:31:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.704 [2024-07-10 14:31:44.993583] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:29:35.704 [2024-07-10 14:31:44.993731] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.704 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.704 [2024-07-10 14:31:45.135243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:35.961 [2024-07-10 14:31:45.383214] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.961 [2024-07-10 14:31:45.383283] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.961 [2024-07-10 14:31:45.383312] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.961 [2024-07-10 14:31:45.383329] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.961 [2024-07-10 14:31:45.383346] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:35.961 [2024-07-10 14:31:45.383499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:35.961 [2024-07-10 14:31:45.383581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.961 [2024-07-10 14:31:45.383591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:36.526 14:31:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:36.526 14:31:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:29:36.526 14:31:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:36.526 14:31:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:36.526 14:31:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.783 [2024-07-10 14:31:46.015799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.783 Malloc0 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.783 14:31:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.784 [2024-07-10 14:31:46.127177] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.784 
14:31:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.784 [2024-07-10 14:31:46.135046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.784 Malloc1 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1479213 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1479213 /var/tmp/bdevperf.sock 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1479213 ']' 00:29:36.784 14:31:46 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:36.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:36.784 14:31:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:38.155 NVMe0n1 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.155 1 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:38.155 request: 00:29:38.155 { 00:29:38.155 "name": "NVMe0", 00:29:38.155 "trtype": "tcp", 00:29:38.155 "traddr": "10.0.0.2", 00:29:38.155 "adrfam": "ipv4", 00:29:38.155 "trsvcid": "4420", 00:29:38.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.155 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:38.155 "hostaddr": "10.0.0.2", 00:29:38.155 "hostsvcid": "60000", 00:29:38.155 "prchk_reftag": false, 00:29:38.155 "prchk_guard": false, 00:29:38.155 "hdgst": false, 00:29:38.155 "ddgst": false, 00:29:38.155 "method": "bdev_nvme_attach_controller", 00:29:38.155 "req_id": 1 00:29:38.155 } 00:29:38.155 Got JSON-RPC error response 00:29:38.155 response: 00:29:38.155 { 00:29:38.155 "code": -114, 00:29:38.155 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:38.155 } 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:38.155 request: 00:29:38.155 { 00:29:38.155 "name": "NVMe0", 00:29:38.155 "trtype": "tcp", 00:29:38.155 "traddr": "10.0.0.2", 00:29:38.155 "adrfam": "ipv4", 00:29:38.155 "trsvcid": "4420", 00:29:38.155 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:38.155 "hostaddr": "10.0.0.2", 00:29:38.155 "hostsvcid": "60000", 00:29:38.155 "prchk_reftag": false, 00:29:38.155 "prchk_guard": false, 
00:29:38.155 "hdgst": false, 00:29:38.155 "ddgst": false, 00:29:38.155 "method": "bdev_nvme_attach_controller", 00:29:38.155 "req_id": 1 00:29:38.155 } 00:29:38.155 Got JSON-RPC error response 00:29:38.155 response: 00:29:38.155 { 00:29:38.155 "code": -114, 00:29:38.155 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:38.155 } 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:38.155 request: 00:29:38.155 { 00:29:38.155 "name": "NVMe0", 00:29:38.155 "trtype": "tcp", 00:29:38.155 "traddr": "10.0.0.2", 00:29:38.155 "adrfam": "ipv4", 00:29:38.155 "trsvcid": "4420", 00:29:38.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.155 "hostaddr": "10.0.0.2", 00:29:38.155 "hostsvcid": "60000", 00:29:38.155 "prchk_reftag": false, 00:29:38.155 "prchk_guard": false, 00:29:38.155 "hdgst": false, 00:29:38.155 "ddgst": false, 00:29:38.155 "multipath": "disable", 00:29:38.155 "method": "bdev_nvme_attach_controller", 00:29:38.155 "req_id": 1 00:29:38.155 } 00:29:38.155 Got JSON-RPC error response 00:29:38.155 response: 00:29:38.155 { 00:29:38.155 "code": -114, 00:29:38.155 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:29:38.155 } 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:38.155 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:38.156 14:31:47 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:38.156 request: 00:29:38.156 { 00:29:38.156 "name": "NVMe0", 00:29:38.156 "trtype": "tcp", 00:29:38.156 "traddr": "10.0.0.2", 00:29:38.156 "adrfam": "ipv4", 00:29:38.156 "trsvcid": "4420", 00:29:38.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.156 "hostaddr": "10.0.0.2", 00:29:38.156 "hostsvcid": "60000", 00:29:38.156 "prchk_reftag": false, 00:29:38.156 "prchk_guard": false, 00:29:38.156 "hdgst": false, 00:29:38.156 "ddgst": false, 00:29:38.156 "multipath": "failover", 00:29:38.156 "method": "bdev_nvme_attach_controller", 00:29:38.156 "req_id": 1 00:29:38.156 } 00:29:38.156 Got JSON-RPC error response 00:29:38.156 response: 00:29:38.156 { 00:29:38.156 "code": -114, 00:29:38.156 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:38.156 } 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:38.156 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.156 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:38.413 00:29:38.413 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.413 14:31:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:38.413 14:31:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:38.413 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.413 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:38.413 14:31:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.413 14:31:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:38.413 14:31:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:39.346 0 00:29:39.347 14:31:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:39.347 14:31:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.347 14:31:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:39.347 14:31:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.347 14:31:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1479213 00:29:39.347 14:31:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1479213 ']' 00:29:39.347 14:31:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1479213 00:29:39.347 14:31:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:29:39.347 14:31:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:39.347 14:31:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1479213 00:29:39.347 14:31:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:39.347 14:31:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:39.347 14:31:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1479213' 00:29:39.347 killing process with pid 1479213 00:29:39.347 14:31:48 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1479213 00:29:39.347 14:31:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1479213 00:29:40.720 14:31:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:40.720 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.720 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:40.720 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:29:40.721 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:40.721 [2024-07-10 14:31:46.323507] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:29:40.721 [2024-07-10 14:31:46.323683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479213 ] 00:29:40.721 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.721 [2024-07-10 14:31:46.446243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.721 [2024-07-10 14:31:46.682700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.721 [2024-07-10 14:31:47.639038] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 96e5b3dd-5382-476f-8c24-4299b64d3932 already exists 00:29:40.721 [2024-07-10 14:31:47.639114] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:96e5b3dd-5382-476f-8c24-4299b64d3932 alias for bdev NVMe1n1 00:29:40.721 [2024-07-10 14:31:47.639149] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:40.721 Running I/O for 1 seconds... 
00:29:40.721 00:29:40.721 Latency(us) 00:29:40.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.721 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:40.721 NVMe0n1 : 1.01 11478.57 44.84 0.00 0.00 11128.61 9951.76 26020.22 00:29:40.721 =================================================================================================================== 00:29:40.721 Total : 11478.57 44.84 0.00 0.00 11128.61 9951.76 26020.22 00:29:40.721 Received shutdown signal, test time was about 1.000000 seconds 00:29:40.721 00:29:40.721 Latency(us) 00:29:40.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.721 =================================================================================================================== 00:29:40.721 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:40.721 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:40.721 rmmod nvme_tcp 00:29:40.721 rmmod nvme_fabrics 00:29:40.721 rmmod nvme_keyring 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1479054 ']' 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1479054 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1479054 ']' 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1479054 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1479054 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1479054' 00:29:40.721 killing process with pid 1479054 00:29:40.721 14:31:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1479054 00:29:40.721 14:31:49 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1479054 00:29:42.095 14:31:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:42.095 14:31:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:42.095 14:31:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:42.095 14:31:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:42.095 14:31:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:42.095 14:31:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.095 14:31:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:42.095 14:31:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.625 14:31:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:44.625 00:29:44.625 real 0m10.723s 00:29:44.625 user 0m21.673s 00:29:44.625 sys 0m2.617s 00:29:44.625 14:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:44.625 14:31:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.625 ************************************ 00:29:44.625 END TEST nvmf_multicontroller 00:29:44.625 ************************************ 00:29:44.625 14:31:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:44.625 14:31:53 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:44.625 14:31:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:44.625 14:31:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:44.625 14:31:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:44.625 ************************************ 00:29:44.625 START TEST nvmf_aer 00:29:44.625 ************************************ 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:44.625 * Looking for test storage... 
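The nvmf_multicontroller case that just wrapped up drives everything through bdevperf's RPC socket: detach the original NVMe0 path, attach a second controller name against the same subsystem, check that two controller entries exist, then push I/O with bdevperf.py. Below is a condensed sketch of that sequence, with the socket path and flags copied from the trace above; rpc_cmd in the suite is a wrapper around scripts/rpc.py, and this is an illustration rather than the actual host/multicontroller.sh.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # drop the first path to the subsystem (multicontroller.sh@83 in the trace)
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # attach again under a new controller name (multicontroller.sh@87); the duplicate
    # bdev-name/uuid errors captured in try.txt were logged at this point in the run
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

    # expect two controller entries, then run the actual I/O pass
    [ "$($rpc -s $sock bdev_nvme_get_controllers | grep -c NVMe)" -eq 2 ]
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s $sock perform_tests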
00:29:44.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:44.625 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.626 14:31:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.626 14:31:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.626 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:44.626 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:44.626 14:31:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:29:44.626 14:31:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:46.526 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:29:46.526 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.526 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:46.527 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:46.527 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:46.527 
14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:46.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:46.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:29:46.527 00:29:46.527 --- 10.0.0.2 ping statistics --- 00:29:46.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.527 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:46.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:46.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:29:46.527 00:29:46.527 --- 10.0.0.1 ping statistics --- 00:29:46.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.527 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1481693 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1481693 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1481693 ']' 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:46.527 14:31:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.527 [2024-07-10 14:31:55.803728] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:29:46.527 [2024-07-10 14:31:55.803879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.527 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.527 [2024-07-10 14:31:55.934309] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:46.785 [2024-07-10 14:31:56.163347] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.785 [2024-07-10 14:31:56.163411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:46.785 [2024-07-10 14:31:56.163442] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.785 [2024-07-10 14:31:56.163461] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.785 [2024-07-10 14:31:56.163488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:46.785 [2024-07-10 14:31:56.163594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.785 [2024-07-10 14:31:56.163645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.785 [2024-07-10 14:31:56.163670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.785 [2024-07-10 14:31:56.163681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.356 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:47.356 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:29:47.356 14:31:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:47.356 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:47.356 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.356 14:31:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.356 14:31:56 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:47.356 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.356 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.356 [2024-07-10 14:31:56.723886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.357 Malloc0 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.357 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.357 [2024-07-10 14:31:56.830360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:29:47.660 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.660 14:31:56 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:47.660 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.660 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.660 [ 00:29:47.660 { 00:29:47.660 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:47.660 "subtype": "Discovery", 00:29:47.660 "listen_addresses": [], 00:29:47.660 "allow_any_host": true, 00:29:47.660 "hosts": [] 00:29:47.660 }, 00:29:47.660 { 00:29:47.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:47.660 "subtype": "NVMe", 00:29:47.660 "listen_addresses": [ 00:29:47.660 { 00:29:47.660 "trtype": "TCP", 00:29:47.660 "adrfam": "IPv4", 00:29:47.660 "traddr": "10.0.0.2", 00:29:47.660 "trsvcid": "4420" 00:29:47.660 } 00:29:47.660 ], 00:29:47.660 "allow_any_host": true, 00:29:47.660 "hosts": [], 00:29:47.660 "serial_number": "SPDK00000000000001", 00:29:47.660 "model_number": "SPDK bdev Controller", 00:29:47.661 "max_namespaces": 2, 00:29:47.661 "min_cntlid": 1, 00:29:47.661 "max_cntlid": 65519, 00:29:47.661 "namespaces": [ 00:29:47.661 { 00:29:47.661 "nsid": 1, 00:29:47.661 "bdev_name": "Malloc0", 00:29:47.661 "name": "Malloc0", 00:29:47.661 "nguid": "0613698EB3F54E458FFB9EC60FE0BC3F", 00:29:47.661 "uuid": "0613698e-b3f5-4e45-8ffb-9ec60fe0bc3f" 00:29:47.661 } 00:29:47.661 ] 00:29:47.661 } 00:29:47.661 ] 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1481845 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:29:47.661 14:31:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:47.661 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.661 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:47.661 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:29:47.661 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:29:47.661 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:47.942 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:47.942 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:29:47.942 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:29:47.942 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:47.942 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:47.942 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:47.942 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:29:47.942 14:31:57 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:47.942 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.942 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.942 Malloc1 00:29:47.942 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.942 14:31:57 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:47.942 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.942 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:48.200 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.200 14:31:57 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:48.200 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.200 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:48.200 [ 00:29:48.200 { 00:29:48.200 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:48.200 "subtype": "Discovery", 00:29:48.200 "listen_addresses": [], 00:29:48.200 "allow_any_host": true, 00:29:48.200 "hosts": [] 00:29:48.200 }, 00:29:48.200 { 00:29:48.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.200 "subtype": "NVMe", 00:29:48.200 "listen_addresses": [ 00:29:48.200 { 00:29:48.200 "trtype": "TCP", 00:29:48.200 "adrfam": "IPv4", 00:29:48.200 "traddr": "10.0.0.2", 00:29:48.200 "trsvcid": "4420" 00:29:48.200 } 00:29:48.200 ], 00:29:48.200 "allow_any_host": true, 00:29:48.200 "hosts": [], 00:29:48.200 "serial_number": "SPDK00000000000001", 00:29:48.200 "model_number": "SPDK bdev Controller", 00:29:48.200 "max_namespaces": 2, 00:29:48.200 "min_cntlid": 1, 00:29:48.200 "max_cntlid": 65519, 00:29:48.200 "namespaces": [ 00:29:48.200 { 00:29:48.200 "nsid": 1, 00:29:48.200 "bdev_name": "Malloc0", 00:29:48.200 "name": "Malloc0", 00:29:48.200 "nguid": "0613698EB3F54E458FFB9EC60FE0BC3F", 00:29:48.200 "uuid": "0613698e-b3f5-4e45-8ffb-9ec60fe0bc3f" 00:29:48.200 }, 00:29:48.200 { 00:29:48.200 "nsid": 2, 00:29:48.200 "bdev_name": "Malloc1", 00:29:48.200 "name": "Malloc1", 00:29:48.200 "nguid": "585DC64702FF44808D1E06C7245F566A", 00:29:48.200 "uuid": "585dc647-02ff-4480-8d1e-06c7245f566a" 00:29:48.200 } 00:29:48.200 ] 00:29:48.200 } 00:29:48.200 ] 00:29:48.200 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.200 14:31:57 
nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1481845 00:29:48.200 Asynchronous Event Request test 00:29:48.200 Attaching to 10.0.0.2 00:29:48.200 Attached to 10.0.0.2 00:29:48.200 Registering asynchronous event callbacks... 00:29:48.200 Starting namespace attribute notice tests for all controllers... 00:29:48.200 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:48.200 aer_cb - Changed Namespace 00:29:48.200 Cleaning up... 00:29:48.200 14:31:57 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:48.200 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.200 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:48.200 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.200 14:31:57 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:48.200 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.200 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:48.458 rmmod nvme_tcp 00:29:48.458 rmmod nvme_fabrics 00:29:48.458 rmmod nvme_keyring 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1481693 ']' 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1481693 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1481693 ']' 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1481693 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1481693 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1481693' 
00:29:48.458 killing process with pid 1481693 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1481693 00:29:48.458 14:31:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1481693 00:29:49.832 14:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:49.832 14:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:49.832 14:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:49.832 14:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:49.832 14:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:49.832 14:31:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.832 14:31:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:49.832 14:31:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.361 14:32:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:52.361 00:29:52.361 real 0m7.710s 00:29:52.361 user 0m11.327s 00:29:52.361 sys 0m2.139s 00:29:52.361 14:32:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:52.361 14:32:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:52.361 ************************************ 00:29:52.361 END TEST nvmf_aer 00:29:52.361 ************************************ 00:29:52.361 14:32:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:52.361 14:32:01 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:52.361 14:32:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:52.361 14:32:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:52.361 14:32:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.361 ************************************ 00:29:52.361 START TEST nvmf_async_init 00:29:52.361 ************************************ 00:29:52.361 14:32:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:52.361 * Looking for test storage... 
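The nvmf_aer pass reduces to: export a one-namespace subsystem, point the aer example host at it, wait for the touch file it creates once it is ready, then hot-add a second namespace so the target raises a namespace-attribute-changed AEN (the "aer_cb - Changed Namespace" line above). A rough condensation of those host/aer.sh steps follows, with paths and flags taken from the trace and rpc.py assumed to talk to the target's default /var/tmp/spdk.sock; it is a sketch, not the test script itself.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    aer=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # background AER listener; -t makes it touch the file once its callbacks are set up
    $aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

    # hot-add a second namespace -> namespace-attribute-changed AEN at the host
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait    # the aer example exits after its "Cleaning up..." pass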
00:29:52.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:52.361 14:32:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.361 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:52.361 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6441a00dfc364b6a9d1423ee72634e5d 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:52.362 14:32:01 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:29:52.362 14:32:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:54.259 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:54.259 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:54.259 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:54.260 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:54.260 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:54.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:29:54.260 00:29:54.260 --- 10.0.0.2 ping statistics --- 00:29:54.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.260 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:54.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:29:54.260 00:29:54.260 --- 10.0.0.1 ping statistics --- 00:29:54.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.260 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1484042 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1484042 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1484042 ']' 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:54.260 14:32:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.260 [2024-07-10 14:32:03.576237] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:29:54.260 [2024-07-10 14:32:03.576369] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.260 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.260 [2024-07-10 14:32:03.717650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.518 [2024-07-10 14:32:03.975543] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.518 [2024-07-10 14:32:03.975614] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.518 [2024-07-10 14:32:03.975642] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.518 [2024-07-10 14:32:03.975668] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.518 [2024-07-10 14:32:03.975688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:54.518 [2024-07-10 14:32:03.975733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.085 [2024-07-10 14:32:04.523600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.085 null0 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.085 14:32:04 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6441a00dfc364b6a9d1423ee72634e5d 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.085 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.085 [2024-07-10 14:32:04.563938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.343 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.343 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:55.343 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.343 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.343 nvme0n1 00:29:55.343 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.343 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:55.343 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.343 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.343 [ 00:29:55.343 { 00:29:55.343 "name": "nvme0n1", 00:29:55.343 "aliases": [ 00:29:55.343 "6441a00d-fc36-4b6a-9d14-23ee72634e5d" 00:29:55.343 ], 00:29:55.343 "product_name": "NVMe disk", 00:29:55.343 "block_size": 512, 00:29:55.343 "num_blocks": 2097152, 00:29:55.343 "uuid": "6441a00d-fc36-4b6a-9d14-23ee72634e5d", 00:29:55.343 "assigned_rate_limits": { 00:29:55.343 "rw_ios_per_sec": 0, 00:29:55.343 "rw_mbytes_per_sec": 0, 00:29:55.343 "r_mbytes_per_sec": 0, 00:29:55.343 "w_mbytes_per_sec": 0 00:29:55.343 }, 00:29:55.343 "claimed": false, 00:29:55.343 "zoned": false, 00:29:55.343 "supported_io_types": { 00:29:55.343 "read": true, 00:29:55.343 "write": true, 00:29:55.343 "unmap": false, 00:29:55.343 "flush": true, 00:29:55.343 "reset": true, 00:29:55.343 "nvme_admin": true, 00:29:55.343 "nvme_io": true, 00:29:55.343 "nvme_io_md": false, 00:29:55.343 "write_zeroes": true, 00:29:55.343 "zcopy": false, 00:29:55.343 "get_zone_info": false, 00:29:55.343 "zone_management": false, 00:29:55.343 "zone_append": false, 00:29:55.343 "compare": true, 00:29:55.343 "compare_and_write": true, 00:29:55.343 "abort": true, 00:29:55.343 "seek_hole": false, 00:29:55.343 "seek_data": false, 00:29:55.343 "copy": true, 00:29:55.343 "nvme_iov_md": false 00:29:55.343 }, 00:29:55.343 "memory_domains": [ 00:29:55.343 { 00:29:55.343 "dma_device_id": "system", 00:29:55.343 "dma_device_type": 1 00:29:55.343 } 00:29:55.343 ], 00:29:55.343 "driver_specific": { 00:29:55.343 "nvme": [ 00:29:55.343 { 00:29:55.343 "trid": { 00:29:55.343 "trtype": "TCP", 00:29:55.343 "adrfam": "IPv4", 00:29:55.343 "traddr": "10.0.0.2", 
00:29:55.343 "trsvcid": "4420", 00:29:55.343 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:55.343 }, 00:29:55.343 "ctrlr_data": { 00:29:55.343 "cntlid": 1, 00:29:55.343 "vendor_id": "0x8086", 00:29:55.343 "model_number": "SPDK bdev Controller", 00:29:55.343 "serial_number": "00000000000000000000", 00:29:55.343 "firmware_revision": "24.09", 00:29:55.343 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:55.343 "oacs": { 00:29:55.343 "security": 0, 00:29:55.343 "format": 0, 00:29:55.343 "firmware": 0, 00:29:55.343 "ns_manage": 0 00:29:55.343 }, 00:29:55.343 "multi_ctrlr": true, 00:29:55.343 "ana_reporting": false 00:29:55.343 }, 00:29:55.343 "vs": { 00:29:55.343 "nvme_version": "1.3" 00:29:55.343 }, 00:29:55.343 "ns_data": { 00:29:55.343 "id": 1, 00:29:55.343 "can_share": true 00:29:55.343 } 00:29:55.343 } 00:29:55.343 ], 00:29:55.343 "mp_policy": "active_passive" 00:29:55.343 } 00:29:55.343 } 00:29:55.343 ] 00:29:55.343 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.343 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:55.343 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.343 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.343 [2024-07-10 14:32:04.820344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:55.343 [2024-07-10 14:32:04.820502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:29:55.602 [2024-07-10 14:32:04.952664] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.602 [ 00:29:55.602 { 00:29:55.602 "name": "nvme0n1", 00:29:55.602 "aliases": [ 00:29:55.602 "6441a00d-fc36-4b6a-9d14-23ee72634e5d" 00:29:55.602 ], 00:29:55.602 "product_name": "NVMe disk", 00:29:55.602 "block_size": 512, 00:29:55.602 "num_blocks": 2097152, 00:29:55.602 "uuid": "6441a00d-fc36-4b6a-9d14-23ee72634e5d", 00:29:55.602 "assigned_rate_limits": { 00:29:55.602 "rw_ios_per_sec": 0, 00:29:55.602 "rw_mbytes_per_sec": 0, 00:29:55.602 "r_mbytes_per_sec": 0, 00:29:55.602 "w_mbytes_per_sec": 0 00:29:55.602 }, 00:29:55.602 "claimed": false, 00:29:55.602 "zoned": false, 00:29:55.602 "supported_io_types": { 00:29:55.602 "read": true, 00:29:55.602 "write": true, 00:29:55.602 "unmap": false, 00:29:55.602 "flush": true, 00:29:55.602 "reset": true, 00:29:55.602 "nvme_admin": true, 00:29:55.602 "nvme_io": true, 00:29:55.602 "nvme_io_md": false, 00:29:55.602 "write_zeroes": true, 00:29:55.602 "zcopy": false, 00:29:55.602 "get_zone_info": false, 00:29:55.602 "zone_management": false, 00:29:55.602 "zone_append": false, 00:29:55.602 "compare": true, 00:29:55.602 "compare_and_write": true, 00:29:55.602 "abort": true, 00:29:55.602 "seek_hole": false, 00:29:55.602 "seek_data": false, 00:29:55.602 "copy": true, 00:29:55.602 "nvme_iov_md": false 00:29:55.602 }, 00:29:55.602 "memory_domains": [ 00:29:55.602 { 00:29:55.602 "dma_device_id": "system", 00:29:55.602 
"dma_device_type": 1 00:29:55.602 } 00:29:55.602 ], 00:29:55.602 "driver_specific": { 00:29:55.602 "nvme": [ 00:29:55.602 { 00:29:55.602 "trid": { 00:29:55.602 "trtype": "TCP", 00:29:55.602 "adrfam": "IPv4", 00:29:55.602 "traddr": "10.0.0.2", 00:29:55.602 "trsvcid": "4420", 00:29:55.602 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:55.602 }, 00:29:55.602 "ctrlr_data": { 00:29:55.602 "cntlid": 2, 00:29:55.602 "vendor_id": "0x8086", 00:29:55.602 "model_number": "SPDK bdev Controller", 00:29:55.602 "serial_number": "00000000000000000000", 00:29:55.602 "firmware_revision": "24.09", 00:29:55.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:55.602 "oacs": { 00:29:55.602 "security": 0, 00:29:55.602 "format": 0, 00:29:55.602 "firmware": 0, 00:29:55.602 "ns_manage": 0 00:29:55.602 }, 00:29:55.602 "multi_ctrlr": true, 00:29:55.602 "ana_reporting": false 00:29:55.602 }, 00:29:55.602 "vs": { 00:29:55.602 "nvme_version": "1.3" 00:29:55.602 }, 00:29:55.602 "ns_data": { 00:29:55.602 "id": 1, 00:29:55.602 "can_share": true 00:29:55.602 } 00:29:55.602 } 00:29:55.602 ], 00:29:55.602 "mp_policy": "active_passive" 00:29:55.602 } 00:29:55.602 } 00:29:55.602 ] 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4YEWK2WxfV 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4YEWK2WxfV 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.602 14:32:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.602 [2024-07-10 14:32:05.001103] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:55.602 [2024-07-10 14:32:05.001293] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:55.602 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.602 14:32:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4YEWK2WxfV 00:29:55.602 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:55.602 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.602 [2024-07-10 14:32:05.009112] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:55.602 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.602 14:32:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4YEWK2WxfV 00:29:55.602 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.602 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.602 [2024-07-10 14:32:05.017125] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:55.602 [2024-07-10 14:32:05.017251] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:55.861 nvme0n1 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.861 [ 00:29:55.861 { 00:29:55.861 "name": "nvme0n1", 00:29:55.861 "aliases": [ 00:29:55.861 "6441a00d-fc36-4b6a-9d14-23ee72634e5d" 00:29:55.861 ], 00:29:55.861 "product_name": "NVMe disk", 00:29:55.861 "block_size": 512, 00:29:55.861 "num_blocks": 2097152, 00:29:55.861 "uuid": "6441a00d-fc36-4b6a-9d14-23ee72634e5d", 00:29:55.861 "assigned_rate_limits": { 00:29:55.861 "rw_ios_per_sec": 0, 00:29:55.861 "rw_mbytes_per_sec": 0, 00:29:55.861 "r_mbytes_per_sec": 0, 00:29:55.861 "w_mbytes_per_sec": 0 00:29:55.861 }, 00:29:55.861 "claimed": false, 00:29:55.861 "zoned": false, 00:29:55.861 "supported_io_types": { 00:29:55.861 "read": true, 00:29:55.861 "write": true, 00:29:55.861 "unmap": false, 00:29:55.861 "flush": true, 00:29:55.861 "reset": true, 00:29:55.861 "nvme_admin": true, 00:29:55.861 "nvme_io": true, 00:29:55.861 "nvme_io_md": false, 00:29:55.861 "write_zeroes": true, 00:29:55.861 "zcopy": false, 00:29:55.861 "get_zone_info": false, 00:29:55.861 "zone_management": false, 00:29:55.861 "zone_append": false, 00:29:55.861 "compare": true, 00:29:55.861 "compare_and_write": true, 00:29:55.861 "abort": true, 00:29:55.861 "seek_hole": false, 00:29:55.861 "seek_data": false, 00:29:55.861 "copy": true, 00:29:55.861 "nvme_iov_md": false 00:29:55.861 }, 00:29:55.861 "memory_domains": [ 00:29:55.861 { 00:29:55.861 "dma_device_id": "system", 00:29:55.861 "dma_device_type": 1 00:29:55.861 } 00:29:55.861 ], 00:29:55.861 "driver_specific": { 00:29:55.861 "nvme": [ 00:29:55.861 { 00:29:55.861 "trid": { 00:29:55.861 "trtype": "TCP", 00:29:55.861 "adrfam": "IPv4", 00:29:55.861 "traddr": "10.0.0.2", 00:29:55.861 "trsvcid": "4421", 00:29:55.861 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:55.861 }, 00:29:55.861 "ctrlr_data": { 00:29:55.861 "cntlid": 3, 00:29:55.861 "vendor_id": "0x8086", 00:29:55.861 "model_number": "SPDK bdev Controller", 00:29:55.861 "serial_number": "00000000000000000000", 00:29:55.861 "firmware_revision": "24.09", 00:29:55.861 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:29:55.861 "oacs": { 00:29:55.861 "security": 0, 00:29:55.861 "format": 0, 00:29:55.861 "firmware": 0, 00:29:55.861 "ns_manage": 0 00:29:55.861 }, 00:29:55.861 "multi_ctrlr": true, 00:29:55.861 "ana_reporting": false 00:29:55.861 }, 00:29:55.861 "vs": { 00:29:55.861 "nvme_version": "1.3" 00:29:55.861 }, 00:29:55.861 "ns_data": { 00:29:55.861 "id": 1, 00:29:55.861 "can_share": true 00:29:55.861 } 00:29:55.861 } 00:29:55.861 ], 00:29:55.861 "mp_policy": "active_passive" 00:29:55.861 } 00:29:55.861 } 00:29:55.861 ] 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.4YEWK2WxfV 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:55.861 rmmod nvme_tcp 00:29:55.861 rmmod nvme_fabrics 00:29:55.861 rmmod nvme_keyring 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1484042 ']' 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1484042 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1484042 ']' 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1484042 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1484042 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1484042' 00:29:55.861 killing process with pid 1484042 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1484042 00:29:55.861 [2024-07-10 14:32:05.217157] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:55.861 14:32:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1484042 00:29:55.861 [2024-07-10 14:32:05.217225] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:57.236 14:32:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:57.236 14:32:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:57.236 14:32:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:57.236 14:32:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:57.236 14:32:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:57.236 14:32:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.236 14:32:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.236 14:32:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.139 14:32:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:59.139 00:29:59.139 real 0m7.201s 00:29:59.139 user 0m3.852s 00:29:59.139 sys 0m1.963s 00:29:59.139 14:32:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:59.139 14:32:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.139 ************************************ 00:29:59.139 END TEST nvmf_async_init 00:29:59.139 ************************************ 00:29:59.139 14:32:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:59.139 14:32:08 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:59.139 14:32:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:59.139 14:32:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:59.139 14:32:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:59.139 ************************************ 00:29:59.139 START TEST dma 00:29:59.139 ************************************ 00:29:59.139 14:32:08 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:59.139 * Looking for test storage... 
00:29:59.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:59.139 14:32:08 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.139 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.398 14:32:08 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.398 14:32:08 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.398 14:32:08 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.398 14:32:08 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.398 14:32:08 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.398 14:32:08 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.398 14:32:08 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:29:59.398 14:32:08 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.398 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:29:59.398 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:59.398 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:59.398 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.398 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.398 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.398 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:59.398 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:59.398 14:32:08 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:59.398 14:32:08 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:59.398 14:32:08 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:29:59.398 00:29:59.398 real 0m0.067s 00:29:59.398 user 0m0.028s 00:29:59.398 sys 0m0.044s 00:29:59.398 14:32:08 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:59.398 14:32:08 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:29:59.398 ************************************ 00:29:59.398 END TEST dma 00:29:59.398 ************************************ 00:29:59.398 14:32:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:59.398 14:32:08 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:59.398 14:32:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:59.398 14:32:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:59.398 14:32:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:59.398 ************************************ 00:29:59.398 START TEST nvmf_identify 00:29:59.398 ************************************ 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:59.398 * Looking for test storage... 
00:29:59.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:29:59.398 14:32:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:01.300 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.300 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:30:01.300 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:01.300 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:01.300 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:01.300 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:01.300 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:01.300 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:01.301 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:01.301 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:01.301 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:01.301 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:01.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:01.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:30:01.301 00:30:01.301 --- 10.0.0.2 ping statistics --- 00:30:01.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.301 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:01.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:30:01.301 00:30:01.301 --- 10.0.0.1 ping statistics --- 00:30:01.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.301 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1486317 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1486317 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1486317 ']' 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:01.301 14:32:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:01.559 [2024-07-10 14:32:10.858184] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:30:01.560 [2024-07-10 14:32:10.858334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.560 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.560 [2024-07-10 14:32:11.001153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:01.818 [2024-07-10 14:32:11.262036] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:01.818 [2024-07-10 14:32:11.262115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.818 [2024-07-10 14:32:11.262143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.818 [2024-07-10 14:32:11.262164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.818 [2024-07-10 14:32:11.262186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:01.818 [2024-07-10 14:32:11.262315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.818 [2024-07-10 14:32:11.262384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:01.818 [2024-07-10 14:32:11.262478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.818 [2024-07-10 14:32:11.262489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.383 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:02.383 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:30:02.383 14:32:11 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:02.383 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.383 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:02.383 [2024-07-10 14:32:11.762663] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.383 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.383 14:32:11 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:02.383 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:02.383 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:02.383 14:32:11 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:02.383 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.383 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:02.642 Malloc0 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:02.642 [2024-07-10 14:32:11.892772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:02.642 [ 00:30:02.642 { 00:30:02.642 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:02.642 "subtype": "Discovery", 00:30:02.642 "listen_addresses": [ 00:30:02.642 { 00:30:02.642 "trtype": "TCP", 00:30:02.642 "adrfam": "IPv4", 00:30:02.642 "traddr": "10.0.0.2", 00:30:02.642 "trsvcid": "4420" 00:30:02.642 } 00:30:02.642 ], 00:30:02.642 "allow_any_host": true, 00:30:02.642 "hosts": [] 00:30:02.642 }, 00:30:02.642 { 00:30:02.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:02.642 "subtype": "NVMe", 00:30:02.642 "listen_addresses": [ 00:30:02.642 { 00:30:02.642 "trtype": "TCP", 00:30:02.642 "adrfam": "IPv4", 00:30:02.642 "traddr": "10.0.0.2", 00:30:02.642 "trsvcid": "4420" 00:30:02.642 } 00:30:02.642 ], 00:30:02.642 "allow_any_host": true, 00:30:02.642 "hosts": [], 00:30:02.642 "serial_number": "SPDK00000000000001", 00:30:02.642 "model_number": "SPDK bdev Controller", 00:30:02.642 "max_namespaces": 32, 00:30:02.642 "min_cntlid": 1, 00:30:02.642 "max_cntlid": 65519, 00:30:02.642 "namespaces": [ 00:30:02.642 { 00:30:02.642 "nsid": 1, 00:30:02.642 "bdev_name": "Malloc0", 00:30:02.642 "name": "Malloc0", 00:30:02.642 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:02.642 "eui64": "ABCDEF0123456789", 00:30:02.642 "uuid": "25f0c380-7fc8-4eb5-8b49-a45cf09e1d42" 00:30:02.642 } 00:30:02.642 ] 00:30:02.642 } 00:30:02.642 ] 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.642 14:32:11 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:02.642 [2024-07-10 14:32:11.963496] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
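rpc_cmd in the trace above is the autotest wrapper around the target's JSON-RPC interface; outside the harness the same configuration can be applied with scripts/rpc.py. A rough equivalent of the calls made so far, using the arguments from this run:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems                         # prints the JSON shown above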
00:30:02.642 [2024-07-10 14:32:11.963599] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486472 ] 00:30:02.642 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.642 [2024-07-10 14:32:12.024929] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:30:02.642 [2024-07-10 14:32:12.025053] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:02.642 [2024-07-10 14:32:12.025075] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:02.642 [2024-07-10 14:32:12.025119] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:02.642 [2024-07-10 14:32:12.025142] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:02.642 [2024-07-10 14:32:12.028516] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:30:02.642 [2024-07-10 14:32:12.028592] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:02.642 [2024-07-10 14:32:12.036528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:02.642 [2024-07-10 14:32:12.036561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:02.642 [2024-07-10 14:32:12.036578] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:02.642 [2024-07-10 14:32:12.036589] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:02.642 [2024-07-10 14:32:12.036659] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.642 [2024-07-10 14:32:12.036679] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.642 [2024-07-10 14:32:12.036698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.642 [2024-07-10 14:32:12.036740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:02.642 [2024-07-10 14:32:12.036779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.642 [2024-07-10 14:32:12.043453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.642 [2024-07-10 14:32:12.043489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.642 [2024-07-10 14:32:12.043501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.642 [2024-07-10 14:32:12.043514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.642 [2024-07-10 14:32:12.043545] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:02.643 [2024-07-10 14:32:12.043568] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:30:02.643 [2024-07-10 14:32:12.043584] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:30:02.643 [2024-07-10 14:32:12.043616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.043639] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.043652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.643 [2024-07-10 14:32:12.043672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.643 [2024-07-10 14:32:12.043722] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.643 [2024-07-10 14:32:12.043916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.643 [2024-07-10 14:32:12.043939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.643 [2024-07-10 14:32:12.043951] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.043964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.643 [2024-07-10 14:32:12.043980] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:30:02.643 [2024-07-10 14:32:12.044007] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:30:02.643 [2024-07-10 14:32:12.044033] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.044047] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.044059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.643 [2024-07-10 14:32:12.044083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.643 [2024-07-10 14:32:12.044117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.643 [2024-07-10 14:32:12.044275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.643 [2024-07-10 14:32:12.044296] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.643 [2024-07-10 14:32:12.044308] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.044319] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.643 [2024-07-10 14:32:12.044334] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:30:02.643 [2024-07-10 14:32:12.044362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:30:02.643 [2024-07-10 14:32:12.044383] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.044401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.044416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.643 [2024-07-10 14:32:12.044444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.643 [2024-07-10 14:32:12.044486] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.643 [2024-07-10 14:32:12.044634] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.643 [2024-07-10 14:32:12.044656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.643 [2024-07-10 14:32:12.044672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.044684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.643 [2024-07-10 14:32:12.044700] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:02.643 [2024-07-10 14:32:12.044745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.044763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.044775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.643 [2024-07-10 14:32:12.044794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.643 [2024-07-10 14:32:12.044829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.643 [2024-07-10 14:32:12.044982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.643 [2024-07-10 14:32:12.045013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.643 [2024-07-10 14:32:12.045025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.045036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.643 [2024-07-10 14:32:12.045050] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:30:02.643 [2024-07-10 14:32:12.045072] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:30:02.643 [2024-07-10 14:32:12.045094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:02.643 [2024-07-10 14:32:12.045223] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:30:02.643 [2024-07-10 14:32:12.045238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:02.643 [2024-07-10 14:32:12.045260] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.045278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.045290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.643 [2024-07-10 14:32:12.045313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.643 [2024-07-10 14:32:12.045345] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.643 [2024-07-10 14:32:12.045529] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.643 [2024-07-10 14:32:12.045558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:30:02.643 [2024-07-10 14:32:12.045572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.045583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.643 [2024-07-10 14:32:12.045597] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:02.643 [2024-07-10 14:32:12.045633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.045649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.045661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.643 [2024-07-10 14:32:12.045681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.643 [2024-07-10 14:32:12.045712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.643 [2024-07-10 14:32:12.045877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.643 [2024-07-10 14:32:12.045898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.643 [2024-07-10 14:32:12.045913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.045925] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.643 [2024-07-10 14:32:12.045939] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:02.643 [2024-07-10 14:32:12.045967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:30:02.643 [2024-07-10 14:32:12.045990] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:30:02.643 [2024-07-10 14:32:12.046011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:30:02.643 [2024-07-10 14:32:12.046038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.046058] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.643 [2024-07-10 14:32:12.046079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.643 [2024-07-10 14:32:12.046126] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.643 [2024-07-10 14:32:12.046373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:02.643 [2024-07-10 14:32:12.046395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:02.643 [2024-07-10 14:32:12.046412] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.046434] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:02.643 [2024-07-10 14:32:12.046450] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:02.643 [2024-07-10 14:32:12.046463] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.046497] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.046514] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.086577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.643 [2024-07-10 14:32:12.086606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.643 [2024-07-10 14:32:12.086619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.086631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.643 [2024-07-10 14:32:12.086666] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:30:02.643 [2024-07-10 14:32:12.086684] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:30:02.643 [2024-07-10 14:32:12.086702] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:30:02.643 [2024-07-10 14:32:12.086717] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:30:02.643 [2024-07-10 14:32:12.086733] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:30:02.643 [2024-07-10 14:32:12.086748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:30:02.643 [2024-07-10 14:32:12.086771] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:30:02.643 [2024-07-10 14:32:12.086796] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.086811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.086823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.643 [2024-07-10 14:32:12.086849] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:02.643 [2024-07-10 14:32:12.086902] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.643 [2024-07-10 14:32:12.087090] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.643 [2024-07-10 14:32:12.087113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.643 [2024-07-10 14:32:12.087125] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.643 [2024-07-10 14:32:12.087136] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.643 [2024-07-10 14:32:12.087156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.087170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.087187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 
00:30:02.644 [2024-07-10 14:32:12.087207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.644 [2024-07-10 14:32:12.087225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.087238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.087248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:02.644 [2024-07-10 14:32:12.087264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.644 [2024-07-10 14:32:12.087280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.087291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.087301] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:02.644 [2024-07-10 14:32:12.087331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.644 [2024-07-10 14:32:12.087347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.087359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.087373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:02.644 [2024-07-10 14:32:12.087406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.644 [2024-07-10 14:32:12.087419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:30:02.644 [2024-07-10 14:32:12.091472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:02.644 [2024-07-10 14:32:12.091500] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.091514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:02.644 [2024-07-10 14:32:12.091534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.644 [2024-07-10 14:32:12.091576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.644 [2024-07-10 14:32:12.091595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:02.644 [2024-07-10 14:32:12.091607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:02.644 [2024-07-10 14:32:12.091619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:02.644 [2024-07-10 14:32:12.091630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:02.644 [2024-07-10 14:32:12.091832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.644 [2024-07-10 14:32:12.091853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.644 [2024-07-10 14:32:12.091865] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.091876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:02.644 [2024-07-10 14:32:12.091892] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:30:02.644 [2024-07-10 14:32:12.091908] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:30:02.644 [2024-07-10 14:32:12.091941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.091958] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:02.644 [2024-07-10 14:32:12.091978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.644 [2024-07-10 14:32:12.092010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:02.644 [2024-07-10 14:32:12.092199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:02.644 [2024-07-10 14:32:12.092223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:02.644 [2024-07-10 14:32:12.092235] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.092247] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:02.644 [2024-07-10 14:32:12.092267] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:02.644 [2024-07-10 14:32:12.092280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.092300] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.092313] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.092334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.644 [2024-07-10 14:32:12.092351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.644 [2024-07-10 14:32:12.092362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.092378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:02.644 [2024-07-10 14:32:12.092416] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:30:02.644 [2024-07-10 14:32:12.092490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.092509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:02.644 [2024-07-10 14:32:12.092538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.644 [2024-07-10 14:32:12.092559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.092573] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.092584] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=5 on tqpair(0x615000015700) 00:30:02.644 [2024-07-10 14:32:12.092602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.644 [2024-07-10 14:32:12.092634] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:02.644 [2024-07-10 14:32:12.092653] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:02.644 [2024-07-10 14:32:12.092980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:02.644 [2024-07-10 14:32:12.093001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:02.644 [2024-07-10 14:32:12.093014] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.093026] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:30:02.644 [2024-07-10 14:32:12.093039] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:30:02.644 [2024-07-10 14:32:12.093058] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.093077] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.093090] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.093111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.644 [2024-07-10 14:32:12.093128] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.644 [2024-07-10 14:32:12.093139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.644 [2024-07-10 14:32:12.093151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:02.905 [2024-07-10 14:32:12.133583] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.905 [2024-07-10 14:32:12.133615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.905 [2024-07-10 14:32:12.133628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.905 [2024-07-10 14:32:12.133640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:02.905 [2024-07-10 14:32:12.133680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.905 [2024-07-10 14:32:12.133696] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:02.905 [2024-07-10 14:32:12.133718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.905 [2024-07-10 14:32:12.133762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:02.905 [2024-07-10 14:32:12.133970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:02.905 [2024-07-10 14:32:12.133993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:02.906 [2024-07-10 14:32:12.134005] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:02.906 [2024-07-10 14:32:12.134015] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:30:02.906 [2024-07-10 14:32:12.134032] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:30:02.906 [2024-07-10 14:32:12.134045] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.906 [2024-07-10 14:32:12.134074] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:02.906 [2024-07-10 14:32:12.134090] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:02.906 [2024-07-10 14:32:12.134131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.906 [2024-07-10 14:32:12.134150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.906 [2024-07-10 14:32:12.134161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.906 [2024-07-10 14:32:12.134172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:02.906 [2024-07-10 14:32:12.134200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.906 [2024-07-10 14:32:12.134217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:02.906 [2024-07-10 14:32:12.134248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.906 [2024-07-10 14:32:12.134304] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:02.906 [2024-07-10 14:32:12.138452] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:02.906 [2024-07-10 14:32:12.138475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:02.906 [2024-07-10 14:32:12.138487] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:02.906 [2024-07-10 14:32:12.138497] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:30:02.906 [2024-07-10 14:32:12.138509] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:30:02.906 [2024-07-10 14:32:12.138520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.906 [2024-07-10 14:32:12.138536] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:02.906 [2024-07-10 14:32:12.138548] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:02.906 [2024-07-10 14:32:12.178469] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.906 [2024-07-10 14:32:12.178498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.906 [2024-07-10 14:32:12.178511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.906 [2024-07-10 14:32:12.178523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:02.906 ===================================================== 00:30:02.906 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:02.906 ===================================================== 00:30:02.906 Controller Capabilities/Features 00:30:02.906 ================================ 00:30:02.906 Vendor ID: 0000 00:30:02.906 Subsystem Vendor ID: 0000 00:30:02.906 Serial Number: .................... 00:30:02.906 Model Number: ........................................ 
00:30:02.906 Firmware Version: 24.09 00:30:02.906 Recommended Arb Burst: 0 00:30:02.906 IEEE OUI Identifier: 00 00 00 00:30:02.906 Multi-path I/O 00:30:02.906 May have multiple subsystem ports: No 00:30:02.906 May have multiple controllers: No 00:30:02.906 Associated with SR-IOV VF: No 00:30:02.906 Max Data Transfer Size: 131072 00:30:02.906 Max Number of Namespaces: 0 00:30:02.906 Max Number of I/O Queues: 1024 00:30:02.906 NVMe Specification Version (VS): 1.3 00:30:02.906 NVMe Specification Version (Identify): 1.3 00:30:02.906 Maximum Queue Entries: 128 00:30:02.906 Contiguous Queues Required: Yes 00:30:02.906 Arbitration Mechanisms Supported 00:30:02.906 Weighted Round Robin: Not Supported 00:30:02.906 Vendor Specific: Not Supported 00:30:02.906 Reset Timeout: 15000 ms 00:30:02.906 Doorbell Stride: 4 bytes 00:30:02.906 NVM Subsystem Reset: Not Supported 00:30:02.906 Command Sets Supported 00:30:02.906 NVM Command Set: Supported 00:30:02.906 Boot Partition: Not Supported 00:30:02.906 Memory Page Size Minimum: 4096 bytes 00:30:02.906 Memory Page Size Maximum: 4096 bytes 00:30:02.906 Persistent Memory Region: Not Supported 00:30:02.906 Optional Asynchronous Events Supported 00:30:02.906 Namespace Attribute Notices: Not Supported 00:30:02.906 Firmware Activation Notices: Not Supported 00:30:02.906 ANA Change Notices: Not Supported 00:30:02.906 PLE Aggregate Log Change Notices: Not Supported 00:30:02.906 LBA Status Info Alert Notices: Not Supported 00:30:02.906 EGE Aggregate Log Change Notices: Not Supported 00:30:02.906 Normal NVM Subsystem Shutdown event: Not Supported 00:30:02.906 Zone Descriptor Change Notices: Not Supported 00:30:02.906 Discovery Log Change Notices: Supported 00:30:02.906 Controller Attributes 00:30:02.906 128-bit Host Identifier: Not Supported 00:30:02.906 Non-Operational Permissive Mode: Not Supported 00:30:02.906 NVM Sets: Not Supported 00:30:02.906 Read Recovery Levels: Not Supported 00:30:02.906 Endurance Groups: Not Supported 00:30:02.906 Predictable Latency Mode: Not Supported 00:30:02.906 Traffic Based Keep ALive: Not Supported 00:30:02.906 Namespace Granularity: Not Supported 00:30:02.906 SQ Associations: Not Supported 00:30:02.906 UUID List: Not Supported 00:30:02.906 Multi-Domain Subsystem: Not Supported 00:30:02.906 Fixed Capacity Management: Not Supported 00:30:02.906 Variable Capacity Management: Not Supported 00:30:02.906 Delete Endurance Group: Not Supported 00:30:02.906 Delete NVM Set: Not Supported 00:30:02.906 Extended LBA Formats Supported: Not Supported 00:30:02.906 Flexible Data Placement Supported: Not Supported 00:30:02.906 00:30:02.906 Controller Memory Buffer Support 00:30:02.906 ================================ 00:30:02.906 Supported: No 00:30:02.906 00:30:02.906 Persistent Memory Region Support 00:30:02.906 ================================ 00:30:02.906 Supported: No 00:30:02.906 00:30:02.906 Admin Command Set Attributes 00:30:02.906 ============================ 00:30:02.906 Security Send/Receive: Not Supported 00:30:02.906 Format NVM: Not Supported 00:30:02.906 Firmware Activate/Download: Not Supported 00:30:02.906 Namespace Management: Not Supported 00:30:02.906 Device Self-Test: Not Supported 00:30:02.906 Directives: Not Supported 00:30:02.906 NVMe-MI: Not Supported 00:30:02.906 Virtualization Management: Not Supported 00:30:02.906 Doorbell Buffer Config: Not Supported 00:30:02.906 Get LBA Status Capability: Not Supported 00:30:02.906 Command & Feature Lockdown Capability: Not Supported 00:30:02.906 Abort Command Limit: 1 00:30:02.906 Async 
Event Request Limit: 4 00:30:02.906 Number of Firmware Slots: N/A 00:30:02.906 Firmware Slot 1 Read-Only: N/A 00:30:02.906 Firmware Activation Without Reset: N/A 00:30:02.906 Multiple Update Detection Support: N/A 00:30:02.906 Firmware Update Granularity: No Information Provided 00:30:02.906 Per-Namespace SMART Log: No 00:30:02.906 Asymmetric Namespace Access Log Page: Not Supported 00:30:02.906 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:02.906 Command Effects Log Page: Not Supported 00:30:02.906 Get Log Page Extended Data: Supported 00:30:02.906 Telemetry Log Pages: Not Supported 00:30:02.906 Persistent Event Log Pages: Not Supported 00:30:02.906 Supported Log Pages Log Page: May Support 00:30:02.906 Commands Supported & Effects Log Page: Not Supported 00:30:02.906 Feature Identifiers & Effects Log Page:May Support 00:30:02.906 NVMe-MI Commands & Effects Log Page: May Support 00:30:02.906 Data Area 4 for Telemetry Log: Not Supported 00:30:02.906 Error Log Page Entries Supported: 128 00:30:02.906 Keep Alive: Not Supported 00:30:02.906 00:30:02.906 NVM Command Set Attributes 00:30:02.906 ========================== 00:30:02.906 Submission Queue Entry Size 00:30:02.906 Max: 1 00:30:02.906 Min: 1 00:30:02.906 Completion Queue Entry Size 00:30:02.906 Max: 1 00:30:02.906 Min: 1 00:30:02.906 Number of Namespaces: 0 00:30:02.906 Compare Command: Not Supported 00:30:02.906 Write Uncorrectable Command: Not Supported 00:30:02.906 Dataset Management Command: Not Supported 00:30:02.906 Write Zeroes Command: Not Supported 00:30:02.906 Set Features Save Field: Not Supported 00:30:02.906 Reservations: Not Supported 00:30:02.906 Timestamp: Not Supported 00:30:02.906 Copy: Not Supported 00:30:02.906 Volatile Write Cache: Not Present 00:30:02.906 Atomic Write Unit (Normal): 1 00:30:02.906 Atomic Write Unit (PFail): 1 00:30:02.906 Atomic Compare & Write Unit: 1 00:30:02.906 Fused Compare & Write: Supported 00:30:02.906 Scatter-Gather List 00:30:02.906 SGL Command Set: Supported 00:30:02.906 SGL Keyed: Supported 00:30:02.906 SGL Bit Bucket Descriptor: Not Supported 00:30:02.906 SGL Metadata Pointer: Not Supported 00:30:02.906 Oversized SGL: Not Supported 00:30:02.906 SGL Metadata Address: Not Supported 00:30:02.906 SGL Offset: Supported 00:30:02.906 Transport SGL Data Block: Not Supported 00:30:02.906 Replay Protected Memory Block: Not Supported 00:30:02.906 00:30:02.906 Firmware Slot Information 00:30:02.906 ========================= 00:30:02.906 Active slot: 0 00:30:02.906 00:30:02.906 00:30:02.906 Error Log 00:30:02.906 ========= 00:30:02.906 00:30:02.906 Active Namespaces 00:30:02.906 ================= 00:30:02.906 Discovery Log Page 00:30:02.906 ================== 00:30:02.906 Generation Counter: 2 00:30:02.906 Number of Records: 2 00:30:02.906 Record Format: 0 00:30:02.906 00:30:02.906 Discovery Log Entry 0 00:30:02.906 ---------------------- 00:30:02.906 Transport Type: 3 (TCP) 00:30:02.906 Address Family: 1 (IPv4) 00:30:02.907 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:02.907 Entry Flags: 00:30:02.907 Duplicate Returned Information: 1 00:30:02.907 Explicit Persistent Connection Support for Discovery: 1 00:30:02.907 Transport Requirements: 00:30:02.907 Secure Channel: Not Required 00:30:02.907 Port ID: 0 (0x0000) 00:30:02.907 Controller ID: 65535 (0xffff) 00:30:02.907 Admin Max SQ Size: 128 00:30:02.907 Transport Service Identifier: 4420 00:30:02.907 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:02.907 Transport Address: 10.0.0.2 00:30:02.907 
Discovery Log Entry 1 00:30:02.907 ---------------------- 00:30:02.907 Transport Type: 3 (TCP) 00:30:02.907 Address Family: 1 (IPv4) 00:30:02.907 Subsystem Type: 2 (NVM Subsystem) 00:30:02.907 Entry Flags: 00:30:02.907 Duplicate Returned Information: 0 00:30:02.907 Explicit Persistent Connection Support for Discovery: 0 00:30:02.907 Transport Requirements: 00:30:02.907 Secure Channel: Not Required 00:30:02.907 Port ID: 0 (0x0000) 00:30:02.907 Controller ID: 65535 (0xffff) 00:30:02.907 Admin Max SQ Size: 128 00:30:02.907 Transport Service Identifier: 4420 00:30:02.907 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:02.907 Transport Address: 10.0.0.2 [2024-07-10 14:32:12.178711] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:30:02.907 [2024-07-10 14:32:12.178742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.907 [2024-07-10 14:32:12.178780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.907 [2024-07-10 14:32:12.178795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:02.907 [2024-07-10 14:32:12.178809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.907 [2024-07-10 14:32:12.178821] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:02.907 [2024-07-10 14:32:12.178835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.907 [2024-07-10 14:32:12.178846] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:02.907 [2024-07-10 14:32:12.178860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.907 [2024-07-10 14:32:12.178889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.178905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.178916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:02.907 [2024-07-10 14:32:12.178941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.907 [2024-07-10 14:32:12.178978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:02.907 [2024-07-10 14:32:12.179153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.907 [2024-07-10 14:32:12.179177] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.907 [2024-07-10 14:32:12.179189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.179201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:02.907 [2024-07-10 14:32:12.179223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.179238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.179249] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x615000015700) 00:30:02.907 [2024-07-10 14:32:12.179269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.907 [2024-07-10 14:32:12.179316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:02.907 [2024-07-10 14:32:12.179508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.907 [2024-07-10 14:32:12.179530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.907 [2024-07-10 14:32:12.179542] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.179553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:02.907 [2024-07-10 14:32:12.179567] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:30:02.907 [2024-07-10 14:32:12.179582] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:30:02.907 [2024-07-10 14:32:12.179609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.179624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.179636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:02.907 [2024-07-10 14:32:12.179655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.907 [2024-07-10 14:32:12.179687] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:02.907 [2024-07-10 14:32:12.179839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.907 [2024-07-10 14:32:12.179861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.907 [2024-07-10 14:32:12.179873] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.179883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:02.907 [2024-07-10 14:32:12.179911] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.179926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.179937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:02.907 [2024-07-10 14:32:12.179955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.907 [2024-07-10 14:32:12.179985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:02.907 [2024-07-10 14:32:12.180134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.907 [2024-07-10 14:32:12.180155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.907 [2024-07-10 14:32:12.180170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.180182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:02.907 [2024-07-10 14:32:12.180209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.180224] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.180235] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:02.907 [2024-07-10 14:32:12.180253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.907 [2024-07-10 14:32:12.180283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:02.907 [2024-07-10 14:32:12.180442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.907 [2024-07-10 14:32:12.180465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.907 [2024-07-10 14:32:12.180476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.180487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:02.907 [2024-07-10 14:32:12.180514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.180529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.907 [2024-07-10 14:32:12.180539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:02.907 [2024-07-10 14:32:12.180557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.907 [2024-07-10 14:32:12.180587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:02.907 [2024-07-10 14:32:12.180736] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.907 [2024-07-10 14:32:12.180756] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.907 [2024-07-10 14:32:12.180767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.180778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:02.908 [2024-07-10 14:32:12.180804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.180819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.180830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:02.908 [2024-07-10 14:32:12.180847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.908 [2024-07-10 14:32:12.180877] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:02.908 [2024-07-10 14:32:12.181037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.908 [2024-07-10 14:32:12.181066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.908 [2024-07-10 14:32:12.181079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.181090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:02.908 [2024-07-10 14:32:12.181117] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.181131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.181142] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:02.908 [2024-07-10 14:32:12.181159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.908 [2024-07-10 14:32:12.181189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:02.908 [2024-07-10 14:32:12.181339] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.908 [2024-07-10 14:32:12.181359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.908 [2024-07-10 14:32:12.181374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.181385] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:02.908 [2024-07-10 14:32:12.181412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.181435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.181447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:02.908 [2024-07-10 14:32:12.181470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.908 [2024-07-10 14:32:12.181502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:02.908 [2024-07-10 14:32:12.181649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.908 [2024-07-10 14:32:12.181671] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.908 [2024-07-10 14:32:12.181682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.181693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:02.908 [2024-07-10 14:32:12.181719] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.181734] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.181745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:02.908 [2024-07-10 14:32:12.181762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.908 [2024-07-10 14:32:12.181792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:02.908 [2024-07-10 14:32:12.181937] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.908 [2024-07-10 14:32:12.181959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.908 [2024-07-10 14:32:12.181970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.181987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:02.908 [2024-07-10 14:32:12.182027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.182043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.182054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:02.908 [2024-07-10 14:32:12.182072] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.908 [2024-07-10 14:32:12.182103] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:02.908 [2024-07-10 14:32:12.182256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.908 [2024-07-10 14:32:12.182276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.908 [2024-07-10 14:32:12.182287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.182298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:02.908 [2024-07-10 14:32:12.182324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.182339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.182350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:02.908 [2024-07-10 14:32:12.182367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.908 [2024-07-10 14:32:12.182413] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:02.908 [2024-07-10 14:32:12.186467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.908 [2024-07-10 14:32:12.186488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.908 [2024-07-10 14:32:12.186503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.186515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:02.908 [2024-07-10 14:32:12.186543] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.186557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.186568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:02.908 [2024-07-10 14:32:12.186586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.908 [2024-07-10 14:32:12.186616] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:02.908 [2024-07-10 14:32:12.186783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.908 [2024-07-10 14:32:12.186805] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.908 [2024-07-10 14:32:12.186816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.908 [2024-07-10 14:32:12.186827] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:02.908 [2024-07-10 14:32:12.186849] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:30:02.908 00:30:02.908 14:32:12 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:02.908 [2024-07-10 14:32:12.286988] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:30:02.908 [2024-07-10 14:32:12.287082] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486579 ] 00:30:02.908 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.909 [2024-07-10 14:32:12.343831] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:30:02.909 [2024-07-10 14:32:12.343948] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:02.909 [2024-07-10 14:32:12.343969] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:02.909 [2024-07-10 14:32:12.344000] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:02.909 [2024-07-10 14:32:12.344023] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:02.909 [2024-07-10 14:32:12.347501] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:30:02.909 [2024-07-10 14:32:12.347587] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:02.909 [2024-07-10 14:32:12.355450] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:02.909 [2024-07-10 14:32:12.355480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:02.909 [2024-07-10 14:32:12.355495] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:02.909 [2024-07-10 14:32:12.355506] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:02.909 [2024-07-10 14:32:12.355576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.355596] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.355615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.909 [2024-07-10 14:32:12.355644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:02.909 [2024-07-10 14:32:12.355696] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.909 [2024-07-10 14:32:12.363454] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.909 [2024-07-10 14:32:12.363485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.909 [2024-07-10 14:32:12.363498] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.363511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.909 [2024-07-10 14:32:12.363557] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:02.909 [2024-07-10 14:32:12.363580] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:30:02.909 [2024-07-10 14:32:12.363597] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:30:02.909 [2024-07-10 14:32:12.363627] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.363643] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.363659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.909 [2024-07-10 14:32:12.363680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.909 [2024-07-10 14:32:12.363715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.909 [2024-07-10 14:32:12.363914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.909 [2024-07-10 14:32:12.363938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.909 [2024-07-10 14:32:12.363950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.363962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.909 [2024-07-10 14:32:12.363984] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:30:02.909 [2024-07-10 14:32:12.364008] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:30:02.909 [2024-07-10 14:32:12.364033] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.364047] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.364059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.909 [2024-07-10 14:32:12.364082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.909 [2024-07-10 14:32:12.364116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.909 [2024-07-10 14:32:12.364306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.909 [2024-07-10 14:32:12.364328] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.909 [2024-07-10 14:32:12.364339] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.364350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.909 [2024-07-10 14:32:12.364369] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:30:02.909 [2024-07-10 14:32:12.364393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:30:02.909 [2024-07-10 14:32:12.364414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.364436] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.364454] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.909 [2024-07-10 14:32:12.364491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.909 [2024-07-10 14:32:12.364530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.909 [2024-07-10 14:32:12.364710] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:30:02.909 [2024-07-10 14:32:12.364730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.909 [2024-07-10 14:32:12.364741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.364752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.909 [2024-07-10 14:32:12.364767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:02.909 [2024-07-10 14:32:12.364794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.364810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.364825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.909 [2024-07-10 14:32:12.364845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.909 [2024-07-10 14:32:12.364878] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.909 [2024-07-10 14:32:12.365059] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.909 [2024-07-10 14:32:12.365086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.909 [2024-07-10 14:32:12.365098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.365109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.909 [2024-07-10 14:32:12.365122] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:30:02.909 [2024-07-10 14:32:12.365137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:30:02.909 [2024-07-10 14:32:12.365164] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:02.909 [2024-07-10 14:32:12.365281] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:30:02.909 [2024-07-10 14:32:12.365309] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:02.909 [2024-07-10 14:32:12.365331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.909 [2024-07-10 14:32:12.365344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.910 [2024-07-10 14:32:12.365355] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.910 [2024-07-10 14:32:12.365374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.910 [2024-07-10 14:32:12.365420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.910 [2024-07-10 14:32:12.365614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.910 [2024-07-10 14:32:12.365636] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.910 [2024-07-10 14:32:12.365647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:30:02.910 [2024-07-10 14:32:12.365662] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.910 [2024-07-10 14:32:12.365678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:02.910 [2024-07-10 14:32:12.365706] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:02.910 [2024-07-10 14:32:12.365726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.910 [2024-07-10 14:32:12.365738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.910 [2024-07-10 14:32:12.365756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.910 [2024-07-10 14:32:12.365792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.910 [2024-07-10 14:32:12.365975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:02.910 [2024-07-10 14:32:12.365996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:02.910 [2024-07-10 14:32:12.366006] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:02.910 [2024-07-10 14:32:12.366021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:02.910 [2024-07-10 14:32:12.366036] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:02.910 [2024-07-10 14:32:12.366062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:30:02.910 [2024-07-10 14:32:12.366085] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:30:02.910 [2024-07-10 14:32:12.366109] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:30:02.910 [2024-07-10 14:32:12.366135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:02.910 [2024-07-10 14:32:12.366150] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:02.910 [2024-07-10 14:32:12.366170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.910 [2024-07-10 14:32:12.366221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:02.910 [2024-07-10 14:32:12.366538] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:02.910 [2024-07-10 14:32:12.366561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:02.910 [2024-07-10 14:32:12.366572] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:02.910 [2024-07-10 14:32:12.366594] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:02.910 [2024-07-10 14:32:12.366607] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:02.910 [2024-07-10 14:32:12.366620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:30:02.910 [2024-07-10 14:32:12.366650] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:02.910 [2024-07-10 14:32:12.366666] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:03.170 [2024-07-10 14:32:12.411446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.170 [2024-07-10 14:32:12.411476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.170 [2024-07-10 14:32:12.411489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.170 [2024-07-10 14:32:12.411501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:03.170 [2024-07-10 14:32:12.411533] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:30:03.170 [2024-07-10 14:32:12.411550] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:30:03.170 [2024-07-10 14:32:12.411562] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:30:03.170 [2024-07-10 14:32:12.411577] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:30:03.170 [2024-07-10 14:32:12.411597] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:30:03.170 [2024-07-10 14:32:12.411614] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:30:03.170 [2024-07-10 14:32:12.411652] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:30:03.170 [2024-07-10 14:32:12.411683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.170 [2024-07-10 14:32:12.411699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.170 [2024-07-10 14:32:12.411727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:03.170 [2024-07-10 14:32:12.411753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:03.170 [2024-07-10 14:32:12.411804] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:03.170 [2024-07-10 14:32:12.411990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.170 [2024-07-10 14:32:12.412015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.170 [2024-07-10 14:32:12.412027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.170 [2024-07-10 14:32:12.412038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:03.170 [2024-07-10 14:32:12.412058] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.170 [2024-07-10 14:32:12.412072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.170 [2024-07-10 14:32:12.412083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:03.170 [2024-07-10 14:32:12.412107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.171 [2024-07-10 14:32:12.412125] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.412137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.412147] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:03.171 [2024-07-10 14:32:12.412163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.171 [2024-07-10 14:32:12.412179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.412194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.412205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:03.171 [2024-07-10 14:32:12.412238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.171 [2024-07-10 14:32:12.412254] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.412265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.412275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:03.171 [2024-07-10 14:32:12.412305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.171 [2024-07-10 14:32:12.412319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.412360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.412385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.412408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:03.171 [2024-07-10 14:32:12.412530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.171 [2024-07-10 14:32:12.412568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:03.171 [2024-07-10 14:32:12.412591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:03.171 [2024-07-10 14:32:12.412605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:03.171 [2024-07-10 14:32:12.412621] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:03.171 [2024-07-10 14:32:12.412634] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:03.171 [2024-07-10 14:32:12.412859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.171 [2024-07-10 14:32:12.412880] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.171 [2024-07-10 14:32:12.412892] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.412903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:03.171 [2024-07-10 14:32:12.412919] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:30:03.171 [2024-07-10 14:32:12.412934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.412956] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.412993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.413012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.413026] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.413037] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:03.171 [2024-07-10 14:32:12.413057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:03.171 [2024-07-10 14:32:12.413104] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:03.171 [2024-07-10 14:32:12.413333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.171 [2024-07-10 14:32:12.413355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.171 [2024-07-10 14:32:12.413367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.413378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:03.171 [2024-07-10 14:32:12.413498] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.413536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.413563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.413578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:03.171 [2024-07-10 14:32:12.413598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.171 [2024-07-10 14:32:12.413636] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:03.171 [2024-07-10 14:32:12.413859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:03.171 [2024-07-10 14:32:12.413879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:03.171 [2024-07-10 14:32:12.413891] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.413902] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:03.171 [2024-07-10 14:32:12.413915] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:03.171 [2024-07-10 14:32:12.413926] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.413958] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.413978] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.456459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.171 [2024-07-10 14:32:12.456487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.171 [2024-07-10 14:32:12.456500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.456511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:03.171 [2024-07-10 14:32:12.456554] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:30:03.171 [2024-07-10 14:32:12.456588] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.456641] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.456670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.456685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:03.171 [2024-07-10 14:32:12.456707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.171 [2024-07-10 14:32:12.456756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:03.171 [2024-07-10 14:32:12.457043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:03.171 [2024-07-10 14:32:12.457065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:03.171 [2024-07-10 14:32:12.457076] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.457087] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:03.171 [2024-07-10 14:32:12.457099] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:03.171 [2024-07-10 14:32:12.457110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.457137] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.457151] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.497591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.171 [2024-07-10 14:32:12.497619] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.171 [2024-07-10 14:32:12.497632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.497644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:03.171 [2024-07-10 14:32:12.497686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.497728] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.497757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.497778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:03.171 [2024-07-10 14:32:12.497799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.171 [2024-07-10 14:32:12.497835] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:03.171 [2024-07-10 14:32:12.498018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:03.171 [2024-07-10 14:32:12.498041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:03.171 [2024-07-10 14:32:12.498052] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.498063] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:03.171 [2024-07-10 14:32:12.498080] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:03.171 [2024-07-10 14:32:12.498093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.498120] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.498146] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.538584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.171 [2024-07-10 14:32:12.538613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.171 [2024-07-10 14:32:12.538625] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.171 [2024-07-10 14:32:12.538636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:03.171 [2024-07-10 14:32:12.538665] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.538690] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.538719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.538737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.538751] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.538766] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:30:03.171 [2024-07-10 14:32:12.538779] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:30:03.171 [2024-07-10 14:32:12.538792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 
00:30:03.171 [2024-07-10 14:32:12.538805] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:30:03.172 [2024-07-10 14:32:12.538874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.538892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:03.172 [2024-07-10 14:32:12.538917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.172 [2024-07-10 14:32:12.538941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.538955] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.538966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:03.172 [2024-07-10 14:32:12.538983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.172 [2024-07-10 14:32:12.539016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:03.172 [2024-07-10 14:32:12.539051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:03.172 [2024-07-10 14:32:12.539250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.172 [2024-07-10 14:32:12.539271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.172 [2024-07-10 14:32:12.539283] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.539301] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:03.172 [2024-07-10 14:32:12.539324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.172 [2024-07-10 14:32:12.539340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.172 [2024-07-10 14:32:12.539355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.539366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:03.172 [2024-07-10 14:32:12.539391] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.539406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:03.172 [2024-07-10 14:32:12.543449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.172 [2024-07-10 14:32:12.543492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:03.172 [2024-07-10 14:32:12.543698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.172 [2024-07-10 14:32:12.543718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.172 [2024-07-10 14:32:12.543730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.543740] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:03.172 [2024-07-10 14:32:12.543766] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.543786] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:03.172 [2024-07-10 14:32:12.543804] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.172 [2024-07-10 14:32:12.543834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:03.172 [2024-07-10 14:32:12.544034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.172 [2024-07-10 14:32:12.544055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.172 [2024-07-10 14:32:12.544066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.544077] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:03.172 [2024-07-10 14:32:12.544102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.544116] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:03.172 [2024-07-10 14:32:12.544134] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.172 [2024-07-10 14:32:12.544164] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:03.172 [2024-07-10 14:32:12.544363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.172 [2024-07-10 14:32:12.544383] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.172 [2024-07-10 14:32:12.544394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.544405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:03.172 [2024-07-10 14:32:12.544456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.544475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:03.172 [2024-07-10 14:32:12.544495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.172 [2024-07-10 14:32:12.544517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.544531] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:03.172 [2024-07-10 14:32:12.544549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.172 [2024-07-10 14:32:12.544570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.544583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:03.172 [2024-07-10 14:32:12.544606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.172 [2024-07-10 14:32:12.544631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.544651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:03.172 [2024-07-10 14:32:12.544673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.172 [2024-07-10 14:32:12.544707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:03.172 [2024-07-10 14:32:12.544738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:03.172 [2024-07-10 14:32:12.544750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:03.172 [2024-07-10 14:32:12.544761] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:03.172 [2024-07-10 14:32:12.545131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:03.172 [2024-07-10 14:32:12.545155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:03.172 [2024-07-10 14:32:12.545166] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545189] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:03.172 [2024-07-10 14:32:12.545201] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:03.172 [2024-07-10 14:32:12.545213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545232] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545245] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:03.172 [2024-07-10 14:32:12.545274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:03.172 [2024-07-10 14:32:12.545284] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545295] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:03.172 [2024-07-10 14:32:12.545306] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:03.172 [2024-07-10 14:32:12.545317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545332] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545344] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545368] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:03.172 [2024-07-10 14:32:12.545384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:03.172 [2024-07-10 14:32:12.545395] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545406] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:03.172 [2024-07-10 14:32:12.545418] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:03.172 [2024-07-10 14:32:12.545441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:30:03.172 [2024-07-10 14:32:12.545458] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545470] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:03.172 [2024-07-10 14:32:12.545498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:03.172 [2024-07-10 14:32:12.545508] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545538] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:03.172 [2024-07-10 14:32:12.545551] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:03.172 [2024-07-10 14:32:12.545561] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545578] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545590] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.172 [2024-07-10 14:32:12.545623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.172 [2024-07-10 14:32:12.545633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:03.172 [2024-07-10 14:32:12.545683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.172 [2024-07-10 14:32:12.545714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.172 [2024-07-10 14:32:12.545726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:03.172 [2024-07-10 14:32:12.545768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.172 [2024-07-10 14:32:12.545785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.172 [2024-07-10 14:32:12.545794] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:30:03.172 [2024-07-10 14:32:12.545826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.172 [2024-07-10 14:32:12.545843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.172 [2024-07-10 14:32:12.545853] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.172 [2024-07-10 14:32:12.545863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:03.172 ===================================================== 00:30:03.172 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.172 ===================================================== 00:30:03.172 Controller Capabilities/Features 00:30:03.172 ================================ 00:30:03.172 Vendor ID: 8086 00:30:03.172 Subsystem Vendor ID: 8086 00:30:03.172 Serial Number: SPDK00000000000001 00:30:03.172 Model Number: SPDK bdev Controller 
00:30:03.172 Firmware Version: 24.09 00:30:03.172 Recommended Arb Burst: 6 00:30:03.172 IEEE OUI Identifier: e4 d2 5c 00:30:03.172 Multi-path I/O 00:30:03.173 May have multiple subsystem ports: Yes 00:30:03.173 May have multiple controllers: Yes 00:30:03.173 Associated with SR-IOV VF: No 00:30:03.173 Max Data Transfer Size: 131072 00:30:03.173 Max Number of Namespaces: 32 00:30:03.173 Max Number of I/O Queues: 127 00:30:03.173 NVMe Specification Version (VS): 1.3 00:30:03.173 NVMe Specification Version (Identify): 1.3 00:30:03.173 Maximum Queue Entries: 128 00:30:03.173 Contiguous Queues Required: Yes 00:30:03.173 Arbitration Mechanisms Supported 00:30:03.173 Weighted Round Robin: Not Supported 00:30:03.173 Vendor Specific: Not Supported 00:30:03.173 Reset Timeout: 15000 ms 00:30:03.173 Doorbell Stride: 4 bytes 00:30:03.173 NVM Subsystem Reset: Not Supported 00:30:03.173 Command Sets Supported 00:30:03.173 NVM Command Set: Supported 00:30:03.173 Boot Partition: Not Supported 00:30:03.173 Memory Page Size Minimum: 4096 bytes 00:30:03.173 Memory Page Size Maximum: 4096 bytes 00:30:03.173 Persistent Memory Region: Not Supported 00:30:03.173 Optional Asynchronous Events Supported 00:30:03.173 Namespace Attribute Notices: Supported 00:30:03.173 Firmware Activation Notices: Not Supported 00:30:03.173 ANA Change Notices: Not Supported 00:30:03.173 PLE Aggregate Log Change Notices: Not Supported 00:30:03.173 LBA Status Info Alert Notices: Not Supported 00:30:03.173 EGE Aggregate Log Change Notices: Not Supported 00:30:03.173 Normal NVM Subsystem Shutdown event: Not Supported 00:30:03.173 Zone Descriptor Change Notices: Not Supported 00:30:03.173 Discovery Log Change Notices: Not Supported 00:30:03.173 Controller Attributes 00:30:03.173 128-bit Host Identifier: Supported 00:30:03.173 Non-Operational Permissive Mode: Not Supported 00:30:03.173 NVM Sets: Not Supported 00:30:03.173 Read Recovery Levels: Not Supported 00:30:03.173 Endurance Groups: Not Supported 00:30:03.173 Predictable Latency Mode: Not Supported 00:30:03.173 Traffic Based Keep ALive: Not Supported 00:30:03.173 Namespace Granularity: Not Supported 00:30:03.173 SQ Associations: Not Supported 00:30:03.173 UUID List: Not Supported 00:30:03.173 Multi-Domain Subsystem: Not Supported 00:30:03.173 Fixed Capacity Management: Not Supported 00:30:03.173 Variable Capacity Management: Not Supported 00:30:03.173 Delete Endurance Group: Not Supported 00:30:03.173 Delete NVM Set: Not Supported 00:30:03.173 Extended LBA Formats Supported: Not Supported 00:30:03.173 Flexible Data Placement Supported: Not Supported 00:30:03.173 00:30:03.173 Controller Memory Buffer Support 00:30:03.173 ================================ 00:30:03.173 Supported: No 00:30:03.173 00:30:03.173 Persistent Memory Region Support 00:30:03.173 ================================ 00:30:03.173 Supported: No 00:30:03.173 00:30:03.173 Admin Command Set Attributes 00:30:03.173 ============================ 00:30:03.173 Security Send/Receive: Not Supported 00:30:03.173 Format NVM: Not Supported 00:30:03.173 Firmware Activate/Download: Not Supported 00:30:03.173 Namespace Management: Not Supported 00:30:03.173 Device Self-Test: Not Supported 00:30:03.173 Directives: Not Supported 00:30:03.173 NVMe-MI: Not Supported 00:30:03.173 Virtualization Management: Not Supported 00:30:03.173 Doorbell Buffer Config: Not Supported 00:30:03.173 Get LBA Status Capability: Not Supported 00:30:03.173 Command & Feature Lockdown Capability: Not Supported 00:30:03.173 Abort Command Limit: 4 00:30:03.173 Async 
Event Request Limit: 4 00:30:03.173 Number of Firmware Slots: N/A 00:30:03.173 Firmware Slot 1 Read-Only: N/A 00:30:03.173 Firmware Activation Without Reset: N/A 00:30:03.173 Multiple Update Detection Support: N/A 00:30:03.173 Firmware Update Granularity: No Information Provided 00:30:03.173 Per-Namespace SMART Log: No 00:30:03.173 Asymmetric Namespace Access Log Page: Not Supported 00:30:03.173 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:03.173 Command Effects Log Page: Supported 00:30:03.173 Get Log Page Extended Data: Supported 00:30:03.173 Telemetry Log Pages: Not Supported 00:30:03.173 Persistent Event Log Pages: Not Supported 00:30:03.173 Supported Log Pages Log Page: May Support 00:30:03.173 Commands Supported & Effects Log Page: Not Supported 00:30:03.173 Feature Identifiers & Effects Log Page:May Support 00:30:03.173 NVMe-MI Commands & Effects Log Page: May Support 00:30:03.173 Data Area 4 for Telemetry Log: Not Supported 00:30:03.173 Error Log Page Entries Supported: 128 00:30:03.173 Keep Alive: Supported 00:30:03.173 Keep Alive Granularity: 10000 ms 00:30:03.173 00:30:03.173 NVM Command Set Attributes 00:30:03.173 ========================== 00:30:03.173 Submission Queue Entry Size 00:30:03.173 Max: 64 00:30:03.173 Min: 64 00:30:03.173 Completion Queue Entry Size 00:30:03.173 Max: 16 00:30:03.173 Min: 16 00:30:03.173 Number of Namespaces: 32 00:30:03.173 Compare Command: Supported 00:30:03.173 Write Uncorrectable Command: Not Supported 00:30:03.173 Dataset Management Command: Supported 00:30:03.173 Write Zeroes Command: Supported 00:30:03.173 Set Features Save Field: Not Supported 00:30:03.173 Reservations: Supported 00:30:03.173 Timestamp: Not Supported 00:30:03.173 Copy: Supported 00:30:03.173 Volatile Write Cache: Present 00:30:03.173 Atomic Write Unit (Normal): 1 00:30:03.173 Atomic Write Unit (PFail): 1 00:30:03.173 Atomic Compare & Write Unit: 1 00:30:03.173 Fused Compare & Write: Supported 00:30:03.173 Scatter-Gather List 00:30:03.173 SGL Command Set: Supported 00:30:03.173 SGL Keyed: Supported 00:30:03.173 SGL Bit Bucket Descriptor: Not Supported 00:30:03.173 SGL Metadata Pointer: Not Supported 00:30:03.173 Oversized SGL: Not Supported 00:30:03.173 SGL Metadata Address: Not Supported 00:30:03.173 SGL Offset: Supported 00:30:03.173 Transport SGL Data Block: Not Supported 00:30:03.173 Replay Protected Memory Block: Not Supported 00:30:03.173 00:30:03.173 Firmware Slot Information 00:30:03.173 ========================= 00:30:03.173 Active slot: 1 00:30:03.173 Slot 1 Firmware Revision: 24.09 00:30:03.173 00:30:03.173 00:30:03.173 Commands Supported and Effects 00:30:03.173 ============================== 00:30:03.173 Admin Commands 00:30:03.173 -------------- 00:30:03.173 Get Log Page (02h): Supported 00:30:03.173 Identify (06h): Supported 00:30:03.173 Abort (08h): Supported 00:30:03.173 Set Features (09h): Supported 00:30:03.173 Get Features (0Ah): Supported 00:30:03.173 Asynchronous Event Request (0Ch): Supported 00:30:03.173 Keep Alive (18h): Supported 00:30:03.173 I/O Commands 00:30:03.173 ------------ 00:30:03.173 Flush (00h): Supported LBA-Change 00:30:03.173 Write (01h): Supported LBA-Change 00:30:03.173 Read (02h): Supported 00:30:03.173 Compare (05h): Supported 00:30:03.173 Write Zeroes (08h): Supported LBA-Change 00:30:03.173 Dataset Management (09h): Supported LBA-Change 00:30:03.173 Copy (19h): Supported LBA-Change 00:30:03.173 00:30:03.173 Error Log 00:30:03.173 ========= 00:30:03.173 00:30:03.173 Arbitration 00:30:03.173 =========== 00:30:03.173 Arbitration 
Burst: 1 00:30:03.173 00:30:03.173 Power Management 00:30:03.173 ================ 00:30:03.173 Number of Power States: 1 00:30:03.173 Current Power State: Power State #0 00:30:03.173 Power State #0: 00:30:03.173 Max Power: 0.00 W 00:30:03.173 Non-Operational State: Operational 00:30:03.173 Entry Latency: Not Reported 00:30:03.173 Exit Latency: Not Reported 00:30:03.173 Relative Read Throughput: 0 00:30:03.173 Relative Read Latency: 0 00:30:03.173 Relative Write Throughput: 0 00:30:03.173 Relative Write Latency: 0 00:30:03.173 Idle Power: Not Reported 00:30:03.173 Active Power: Not Reported 00:30:03.173 Non-Operational Permissive Mode: Not Supported 00:30:03.173 00:30:03.173 Health Information 00:30:03.173 ================== 00:30:03.173 Critical Warnings: 00:30:03.173 Available Spare Space: OK 00:30:03.173 Temperature: OK 00:30:03.173 Device Reliability: OK 00:30:03.173 Read Only: No 00:30:03.173 Volatile Memory Backup: OK 00:30:03.173 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:03.173 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:03.173 Available Spare: 0% 00:30:03.173 Available Spare Threshold: 0% 00:30:03.173 Life Percentage Used:[2024-07-10 14:32:12.546071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.173 [2024-07-10 14:32:12.546090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:03.173 [2024-07-10 14:32:12.546110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.173 [2024-07-10 14:32:12.546142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:03.173 [2024-07-10 14:32:12.546349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.173 [2024-07-10 14:32:12.546371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.173 [2024-07-10 14:32:12.546383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.173 [2024-07-10 14:32:12.546395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:03.173 [2024-07-10 14:32:12.546489] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:30:03.173 [2024-07-10 14:32:12.546521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:03.173 [2024-07-10 14:32:12.546548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.173 [2024-07-10 14:32:12.546563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:03.173 [2024-07-10 14:32:12.546577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.173 [2024-07-10 14:32:12.546590] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:03.174 [2024-07-10 14:32:12.546609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.174 [2024-07-10 14:32:12.546623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:03.174 [2024-07-10 14:32:12.546637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.174 [2024-07-10 14:32:12.546666] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.174 [2024-07-10 14:32:12.546682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.174 [2024-07-10 14:32:12.546694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:03.174 [2024-07-10 14:32:12.546733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.174 [2024-07-10 14:32:12.546768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:03.174 [2024-07-10 14:32:12.546977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.174 [2024-07-10 14:32:12.547000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.174 [2024-07-10 14:32:12.547012] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.174 [2024-07-10 14:32:12.547023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:03.174 [2024-07-10 14:32:12.547061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.174 [2024-07-10 14:32:12.547075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.174 [2024-07-10 14:32:12.547087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:03.174 [2024-07-10 14:32:12.547106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.174 [2024-07-10 14:32:12.547145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:03.174 [2024-07-10 14:32:12.547355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.174 [2024-07-10 14:32:12.547380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.174 [2024-07-10 14:32:12.547391] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.174 [2024-07-10 14:32:12.547402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:03.174 [2024-07-10 14:32:12.547416] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:30:03.174 [2024-07-10 14:32:12.551447] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:30:03.174 [2024-07-10 14:32:12.551496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:03.174 [2024-07-10 14:32:12.551519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:03.174 [2024-07-10 14:32:12.551531] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:03.174 [2024-07-10 14:32:12.551555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.174 [2024-07-10 14:32:12.551589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:03.174 [2024-07-10 14:32:12.551796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:03.174 [2024-07-10 14:32:12.551816] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:03.174 [2024-07-10 
14:32:12.551827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:03.174 [2024-07-10 14:32:12.551838] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:03.174 [2024-07-10 14:32:12.551860] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:30:03.174 0% 00:30:03.174 Data Units Read: 0 00:30:03.174 Data Units Written: 0 00:30:03.174 Host Read Commands: 0 00:30:03.174 Host Write Commands: 0 00:30:03.174 Controller Busy Time: 0 minutes 00:30:03.174 Power Cycles: 0 00:30:03.174 Power On Hours: 0 hours 00:30:03.174 Unsafe Shutdowns: 0 00:30:03.174 Unrecoverable Media Errors: 0 00:30:03.174 Lifetime Error Log Entries: 0 00:30:03.174 Warning Temperature Time: 0 minutes 00:30:03.174 Critical Temperature Time: 0 minutes 00:30:03.174 00:30:03.174 Number of Queues 00:30:03.174 ================ 00:30:03.174 Number of I/O Submission Queues: 127 00:30:03.174 Number of I/O Completion Queues: 127 00:30:03.174 00:30:03.174 Active Namespaces 00:30:03.174 ================= 00:30:03.174 Namespace ID:1 00:30:03.174 Error Recovery Timeout: Unlimited 00:30:03.174 Command Set Identifier: NVM (00h) 00:30:03.174 Deallocate: Supported 00:30:03.174 Deallocated/Unwritten Error: Not Supported 00:30:03.174 Deallocated Read Value: Unknown 00:30:03.174 Deallocate in Write Zeroes: Not Supported 00:30:03.174 Deallocated Guard Field: 0xFFFF 00:30:03.174 Flush: Supported 00:30:03.174 Reservation: Supported 00:30:03.174 Namespace Sharing Capabilities: Multiple Controllers 00:30:03.174 Size (in LBAs): 131072 (0GiB) 00:30:03.174 Capacity (in LBAs): 131072 (0GiB) 00:30:03.174 Utilization (in LBAs): 131072 (0GiB) 00:30:03.174 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:03.174 EUI64: ABCDEF0123456789 00:30:03.174 UUID: 25f0c380-7fc8-4eb5-8b49-a45cf09e1d42 00:30:03.174 Thin Provisioning: Not Supported 00:30:03.174 Per-NS Atomic Units: Yes 00:30:03.174 Atomic Boundary Size (Normal): 0 00:30:03.174 Atomic Boundary Size (PFail): 0 00:30:03.174 Atomic Boundary Offset: 0 00:30:03.174 Maximum Single Source Range Length: 65535 00:30:03.174 Maximum Copy Length: 65535 00:30:03.174 Maximum Source Range Count: 1 00:30:03.174 NGUID/EUI64 Never Reused: No 00:30:03.174 Namespace Write Protected: No 00:30:03.174 Number of LBA Formats: 1 00:30:03.174 Current LBA Format: LBA Format #00 00:30:03.174 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:03.174 00:30:03.174 14:32:12 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:03.174 14:32:12 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:03.174 14:32:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.174 14:32:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:03.174 14:32:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.174 14:32:12 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:03.174 14:32:12 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:03.174 14:32:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:03.174 14:32:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:30:03.174 14:32:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:03.174 14:32:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:30:03.174 14:32:12 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@121 -- # for i in {1..20} 00:30:03.174 14:32:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:03.174 rmmod nvme_tcp 00:30:03.174 rmmod nvme_fabrics 00:30:03.432 rmmod nvme_keyring 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1486317 ']' 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1486317 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1486317 ']' 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1486317 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1486317 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1486317' 00:30:03.432 killing process with pid 1486317 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1486317 00:30:03.432 14:32:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1486317 00:30:04.820 14:32:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:04.820 14:32:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:04.820 14:32:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:04.820 14:32:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:04.820 14:32:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:04.820 14:32:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.820 14:32:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:04.820 14:32:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.720 14:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:06.720 00:30:06.720 real 0m7.526s 00:30:06.720 user 0m10.683s 00:30:06.720 sys 0m2.164s 00:30:06.720 14:32:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:06.720 14:32:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.720 ************************************ 00:30:06.720 END TEST nvmf_identify 00:30:06.720 ************************************ 00:30:06.979 14:32:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:06.979 14:32:16 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:06.979 14:32:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:06.979 14:32:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:06.979 14:32:16 nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:30:06.979 ************************************ 00:30:06.979 START TEST nvmf_perf 00:30:06.979 ************************************ 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:06.979 * Looking for test storage... 00:30:06.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.979 
14:32:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:06.979 14:32:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:08.877 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.877 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:08.877 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:08.877 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:08.877 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:08.877 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:08.877 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:08.878 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:08.878 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:08.878 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:08.878 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.878 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:08.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:30:08.878 00:30:08.878 --- 10.0.0.2 ping statistics --- 00:30:08.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.878 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:08.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:30:08.879 00:30:08.879 --- 10.0.0.1 ping statistics --- 00:30:08.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.879 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1488644 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1488644 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1488644 ']' 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:08.879 14:32:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:09.136 [2024-07-10 14:32:18.428784] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:30:09.136 [2024-07-10 14:32:18.428915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.136 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.136 [2024-07-10 14:32:18.559394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:09.393 [2024-07-10 14:32:18.791814] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:09.393 [2024-07-10 14:32:18.791872] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
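For readers following the xtrace, the interface plumbing above is what nvmftestinit performs before the target starts. Condensed into plain commands, and using only the interface names and addresses already printed in this log, the bring-up is roughly:

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # sanity checks, as seen above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched inside the namespace via 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF', which matches the application start-up notices around this point.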
00:30:09.393 [2024-07-10 14:32:18.791895] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.393 [2024-07-10 14:32:18.791911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.393 [2024-07-10 14:32:18.791932] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:09.393 [2024-07-10 14:32:18.792046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.393 [2024-07-10 14:32:18.792127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:09.393 [2024-07-10 14:32:18.792168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.393 [2024-07-10 14:32:18.792180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:09.956 14:32:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:09.956 14:32:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:30:09.956 14:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:09.956 14:32:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:09.956 14:32:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:09.956 14:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.956 14:32:19 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:09.956 14:32:19 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:13.232 14:32:22 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:13.232 14:32:22 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:13.489 14:32:22 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:13.489 14:32:22 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:13.745 14:32:23 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:13.745 14:32:23 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:13.745 14:32:23 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:13.745 14:32:23 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:13.745 14:32:23 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:14.001 [2024-07-10 14:32:23.322585] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.001 14:32:23 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:14.258 14:32:23 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:14.259 14:32:23 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:14.515 14:32:23 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:14.515 14:32:23 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:14.773 14:32:24 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.031 [2024-07-10 14:32:24.328284] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.031 14:32:24 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:15.288 14:32:24 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:15.288 14:32:24 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:15.288 14:32:24 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:15.288 14:32:24 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:16.660 Initializing NVMe Controllers 00:30:16.660 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:16.660 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:16.660 Initialization complete. Launching workers. 00:30:16.660 ======================================================== 00:30:16.660 Latency(us) 00:30:16.660 Device Information : IOPS MiB/s Average min max 00:30:16.660 PCIE (0000:88:00.0) NSID 1 from core 0: 73849.33 288.47 432.42 53.23 4441.02 00:30:16.660 ======================================================== 00:30:16.660 Total : 73849.33 288.47 432.42 53.23 4441.02 00:30:16.660 00:30:16.660 14:32:26 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:16.917 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.288 Initializing NVMe Controllers 00:30:18.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:18.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:18.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:18.288 Initialization complete. Launching workers. 
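Condensing the rpc.py calls traced above, the target-side configuration behind these perf runs amounts to the following sequence (rpc.py path shortened; bdev and subsystem names are the ones shown in the log):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_malloc_create 64 512                                   # creates Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local NVMe drive at 0000:88:00.0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf invocations that follow then connect with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' and exercise both namespaces (NSID 1 and NSID 2 in the latency tables).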
00:30:18.288 ======================================================== 00:30:18.288 Latency(us) 00:30:18.288 Device Information : IOPS MiB/s Average min max 00:30:18.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 74.00 0.29 14099.52 238.39 46352.40 00:30:18.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.00 0.21 18843.25 7938.81 47909.00 00:30:18.288 ======================================================== 00:30:18.288 Total : 129.00 0.50 16122.04 238.39 47909.00 00:30:18.288 00:30:18.288 14:32:27 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:18.288 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.662 Initializing NVMe Controllers 00:30:19.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:19.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:19.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:19.662 Initialization complete. Launching workers. 00:30:19.662 ======================================================== 00:30:19.662 Latency(us) 00:30:19.662 Device Information : IOPS MiB/s Average min max 00:30:19.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5506.26 21.51 5812.62 632.96 11464.84 00:30:19.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3878.68 15.15 8281.11 4644.08 16805.47 00:30:19.662 ======================================================== 00:30:19.662 Total : 9384.94 36.66 6832.81 632.96 16805.47 00:30:19.662 00:30:19.920 14:32:29 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:19.920 14:32:29 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:19.920 14:32:29 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:19.920 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.319 Initializing NVMe Controllers 00:30:23.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.319 Controller IO queue size 128, less than required. 00:30:23.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.319 Controller IO queue size 128, less than required. 00:30:23.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:23.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:23.319 Initialization complete. Launching workers. 
00:30:23.319 ======================================================== 00:30:23.319 Latency(us) 00:30:23.319 Device Information : IOPS MiB/s Average min max 00:30:23.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 822.35 205.59 170116.11 87528.49 447765.21 00:30:23.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 525.90 131.48 248493.51 131886.60 413710.52 00:30:23.319 ======================================================== 00:30:23.319 Total : 1348.25 337.06 200688.24 87528.49 447765.21 00:30:23.319 00:30:23.319 14:32:32 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:23.319 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.319 No valid NVMe controllers or AIO or URING devices found 00:30:23.319 Initializing NVMe Controllers 00:30:23.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.319 Controller IO queue size 128, less than required. 00:30:23.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.319 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:23.319 Controller IO queue size 128, less than required. 00:30:23.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.319 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:23.319 WARNING: Some requested NVMe devices were skipped 00:30:23.319 14:32:32 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:23.320 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.596 Initializing NVMe Controllers 00:30:26.596 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:26.596 Controller IO queue size 128, less than required. 00:30:26.596 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:26.596 Controller IO queue size 128, less than required. 00:30:26.596 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:26.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:26.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:26.596 Initialization complete. Launching workers. 
00:30:26.596 00:30:26.596 ==================== 00:30:26.596 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:26.596 TCP transport: 00:30:26.596 polls: 12435 00:30:26.596 idle_polls: 5771 00:30:26.596 sock_completions: 6664 00:30:26.596 nvme_completions: 3959 00:30:26.596 submitted_requests: 5994 00:30:26.596 queued_requests: 1 00:30:26.596 00:30:26.596 ==================== 00:30:26.596 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:26.596 TCP transport: 00:30:26.596 polls: 16195 00:30:26.596 idle_polls: 7176 00:30:26.596 sock_completions: 9019 00:30:26.596 nvme_completions: 4019 00:30:26.596 submitted_requests: 6044 00:30:26.596 queued_requests: 1 00:30:26.596 ======================================================== 00:30:26.596 Latency(us) 00:30:26.596 Device Information : IOPS MiB/s Average min max 00:30:26.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 989.50 247.37 134911.04 69666.61 294849.27 00:30:26.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1004.50 251.12 133465.47 80450.68 438033.19 00:30:26.596 ======================================================== 00:30:26.596 Total : 1993.99 498.50 134182.82 69666.61 438033.19 00:30:26.596 00:30:26.596 14:32:35 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:26.596 14:32:35 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:26.596 14:32:36 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:26.596 14:32:36 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:26.853 14:32:36 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:30.132 14:32:39 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=a2cfaa73-4a72-4e86-8fc4-9d15b3b0b093 00:30:30.132 14:32:39 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb a2cfaa73-4a72-4e86-8fc4-9d15b3b0b093 00:30:30.132 14:32:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=a2cfaa73-4a72-4e86-8fc4-9d15b3b0b093 00:30:30.132 14:32:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:30.132 14:32:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:30.132 14:32:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:30.132 14:32:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:30.132 14:32:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:30.132 { 00:30:30.132 "uuid": "a2cfaa73-4a72-4e86-8fc4-9d15b3b0b093", 00:30:30.132 "name": "lvs_0", 00:30:30.132 "base_bdev": "Nvme0n1", 00:30:30.132 "total_data_clusters": 238234, 00:30:30.132 "free_clusters": 238234, 00:30:30.132 "block_size": 512, 00:30:30.132 "cluster_size": 4194304 00:30:30.132 } 00:30:30.132 ]' 00:30:30.132 14:32:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a2cfaa73-4a72-4e86-8fc4-9d15b3b0b093") .free_clusters' 00:30:30.390 14:32:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:30:30.390 14:32:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a2cfaa73-4a72-4e86-8fc4-9d15b3b0b093") .cluster_size' 00:30:30.390 14:32:39 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:30.390 14:32:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:30:30.390 14:32:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:30:30.390 952936 00:30:30.390 14:32:39 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:30.390 14:32:39 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:30.390 14:32:39 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a2cfaa73-4a72-4e86-8fc4-9d15b3b0b093 lbd_0 20480 00:30:30.954 14:32:40 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=8f08e119-a54a-444a-82d0-64f304c2b7a3 00:30:30.954 14:32:40 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 8f08e119-a54a-444a-82d0-64f304c2b7a3 lvs_n_0 00:30:31.885 14:32:41 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=196524e0-f2b3-44bb-b966-e2dc1b05e2fc 00:30:31.885 14:32:41 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 196524e0-f2b3-44bb-b966-e2dc1b05e2fc 00:30:31.885 14:32:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=196524e0-f2b3-44bb-b966-e2dc1b05e2fc 00:30:31.885 14:32:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:31.885 14:32:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:31.885 14:32:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:31.885 14:32:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:32.142 14:32:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:32.142 { 00:30:32.142 "uuid": "a2cfaa73-4a72-4e86-8fc4-9d15b3b0b093", 00:30:32.142 "name": "lvs_0", 00:30:32.142 "base_bdev": "Nvme0n1", 00:30:32.142 "total_data_clusters": 238234, 00:30:32.142 "free_clusters": 233114, 00:30:32.142 "block_size": 512, 00:30:32.142 "cluster_size": 4194304 00:30:32.142 }, 00:30:32.142 { 00:30:32.142 "uuid": "196524e0-f2b3-44bb-b966-e2dc1b05e2fc", 00:30:32.142 "name": "lvs_n_0", 00:30:32.142 "base_bdev": "8f08e119-a54a-444a-82d0-64f304c2b7a3", 00:30:32.142 "total_data_clusters": 5114, 00:30:32.142 "free_clusters": 5114, 00:30:32.142 "block_size": 512, 00:30:32.142 "cluster_size": 4194304 00:30:32.142 } 00:30:32.142 ]' 00:30:32.142 14:32:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="196524e0-f2b3-44bb-b966-e2dc1b05e2fc") .free_clusters' 00:30:32.142 14:32:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:30:32.142 14:32:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="196524e0-f2b3-44bb-b966-e2dc1b05e2fc") .cluster_size' 00:30:32.142 14:32:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:32.142 14:32:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:30:32.142 14:32:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:30:32.142 20456 00:30:32.142 14:32:41 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:32.142 14:32:41 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 196524e0-f2b3-44bb-b966-e2dc1b05e2fc lbd_nest_0 20456 00:30:32.399 14:32:41 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=171df0af-dbee-454d-8a5d-9d046e63027d 00:30:32.399 14:32:41 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:32.657 14:32:41 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:32.657 14:32:41 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 171df0af-dbee-454d-8a5d-9d046e63027d 00:30:32.914 14:32:42 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.171 14:32:42 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:33.171 14:32:42 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:33.171 14:32:42 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:33.171 14:32:42 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:33.171 14:32:42 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:33.171 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.365 Initializing NVMe Controllers 00:30:45.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:45.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:45.365 Initialization complete. Launching workers. 00:30:45.365 ======================================================== 00:30:45.365 Latency(us) 00:30:45.365 Device Information : IOPS MiB/s Average min max 00:30:45.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.70 0.02 21963.18 289.52 47894.69 00:30:45.365 ======================================================== 00:30:45.365 Total : 45.70 0.02 21963.18 289.52 47894.69 00:30:45.365 00:30:45.365 14:32:53 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:45.365 14:32:53 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:45.365 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.330 Initializing NVMe Controllers 00:30:55.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:55.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:55.330 Initialization complete. Launching workers. 
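The six single-namespace runs around this point (queue depths 1, 32 and 128, each at 512-byte and 128 KiB I/O sizes) come from the sweep set up by the qd_depth and io_size arrays traced above; a condensed sketch of that loop, using the perf binary and fabric address from this log, is:

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
          # random 50/50 read-write for 10 seconds at each queue depth / I/O size combination
          "$PERF" -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$TRID"
      done
  done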
00:30:55.330 ======================================================== 00:30:55.330 Latency(us) 00:30:55.330 Device Information : IOPS MiB/s Average min max 00:30:55.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.80 9.97 12537.33 3985.34 47887.55 00:30:55.330 ======================================================== 00:30:55.330 Total : 79.80 9.97 12537.33 3985.34 47887.55 00:30:55.330 00:30:55.330 14:33:03 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:55.330 14:33:03 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:55.330 14:33:03 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:55.330 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.294 Initializing NVMe Controllers 00:31:05.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:05.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:05.294 Initialization complete. Launching workers. 00:31:05.294 ======================================================== 00:31:05.294 Latency(us) 00:31:05.294 Device Information : IOPS MiB/s Average min max 00:31:05.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4673.60 2.28 6848.69 490.01 13161.75 00:31:05.294 ======================================================== 00:31:05.294 Total : 4673.60 2.28 6848.69 490.01 13161.75 00:31:05.294 00:31:05.294 14:33:13 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:05.294 14:33:13 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:05.294 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.262 Initializing NVMe Controllers 00:31:15.262 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:15.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:15.262 Initialization complete. Launching workers. 00:31:15.262 ======================================================== 00:31:15.262 Latency(us) 00:31:15.262 Device Information : IOPS MiB/s Average min max 00:31:15.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1945.20 243.15 16457.87 2068.92 51136.65 00:31:15.262 ======================================================== 00:31:15.262 Total : 1945.20 243.15 16457.87 2068.92 51136.65 00:31:15.262 00:31:15.262 14:33:24 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:15.262 14:33:24 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:15.262 14:33:24 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:15.262 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.456 Initializing NVMe Controllers 00:31:27.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:27.456 Controller IO queue size 128, less than required. 00:31:27.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:27.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:27.456 Initialization complete. Launching workers. 00:31:27.457 ======================================================== 00:31:27.457 Latency(us) 00:31:27.457 Device Information : IOPS MiB/s Average min max 00:31:27.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8493.48 4.15 15070.41 1874.15 34797.00 00:31:27.457 ======================================================== 00:31:27.457 Total : 8493.48 4.15 15070.41 1874.15 34797.00 00:31:27.457 00:31:27.457 14:33:34 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:27.457 14:33:34 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:27.457 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.481 Initializing NVMe Controllers 00:31:37.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.481 Controller IO queue size 128, less than required. 00:31:37.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:37.481 Initialization complete. Launching workers. 00:31:37.481 ======================================================== 00:31:37.481 Latency(us) 00:31:37.481 Device Information : IOPS MiB/s Average min max 00:31:37.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1159.40 144.92 110963.74 15994.20 248097.73 00:31:37.481 ======================================================== 00:31:37.481 Total : 1159.40 144.92 110963.74 15994.20 248097.73 00:31:37.481 00:31:37.481 14:33:45 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.481 14:33:45 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 171df0af-dbee-454d-8a5d-9d046e63027d 00:31:37.481 14:33:46 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:37.481 14:33:46 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8f08e119-a54a-444a-82d0-64f304c2b7a3 00:31:37.481 14:33:46 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:37.739 14:33:47 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:37.739 14:33:47 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:37.739 14:33:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:37.739 14:33:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:31:37.739 14:33:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:37.739 14:33:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:31:37.739 14:33:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:37.739 14:33:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:37.997 rmmod nvme_tcp 00:31:37.997 rmmod nvme_fabrics 00:31:37.997 rmmod nvme_keyring 00:31:37.997 14:33:47 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:37.997 14:33:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:31:37.997 14:33:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:31:37.997 14:33:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1488644 ']' 00:31:37.997 14:33:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1488644 00:31:37.997 14:33:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1488644 ']' 00:31:37.997 14:33:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1488644 00:31:37.997 14:33:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:31:37.997 14:33:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:37.997 14:33:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1488644 00:31:37.997 14:33:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:37.997 14:33:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:37.997 14:33:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1488644' 00:31:37.997 killing process with pid 1488644 00:31:37.997 14:33:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1488644 00:31:37.997 14:33:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1488644 00:31:40.522 14:33:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:40.522 14:33:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:40.522 14:33:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:40.522 14:33:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:40.522 14:33:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:40.522 14:33:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.522 14:33:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:40.522 14:33:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.422 14:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:42.422 00:31:42.422 real 1m35.656s 00:31:42.422 user 5m54.244s 00:31:42.422 sys 0m15.475s 00:31:42.422 14:33:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:42.422 14:33:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:42.422 ************************************ 00:31:42.422 END TEST nvmf_perf 00:31:42.422 ************************************ 00:31:42.681 14:33:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:42.681 14:33:51 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:42.681 14:33:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:42.681 14:33:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:42.681 14:33:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:42.681 ************************************ 00:31:42.681 START TEST nvmf_fio_host 00:31:42.681 ************************************ 00:31:42.681 14:33:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:42.681 * Looking for test 
storage... 00:31:42.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:42.681 14:33:52 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:44.581 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:44.581 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:44.581 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:44.581 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
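The device discovery traced above first collects the supported Intel E810/X722 and Mellanox PCI IDs, then maps each matching PCI address to its kernel net device through sysfs and keeps the interfaces that are up, which is how 0000:0a:00.0 and 0000:0a:00.1 resolve to cvl_0_0 and cvl_0_1 in this run. A rough sketch of that mapping step, under the assumption that the interface operstate is what gates the "up" check:

    # Map each supported PCI NIC to its net device name via sysfs (sketch)
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            dev=${netdir##*/}
            if [[ $(cat "$netdir/operstate" 2>/dev/null) == up ]]; then
                echo "Found net devices under $pci: $dev"
            fi
        done
    done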
00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:44.581 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:44.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:44.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:31:44.582 00:31:44.582 --- 10.0.0.2 ping statistics --- 00:31:44.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.582 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:44.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:44.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:31:44.582 00:31:44.582 --- 10.0.0.1 ping statistics --- 00:31:44.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.582 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:44.582 14:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:44.582 14:33:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:44.582 14:33:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:44.582 14:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:44.582 14:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.582 14:33:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1501863 00:31:44.582 14:33:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:44.582 14:33:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:44.582 14:33:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1501863 00:31:44.582 14:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1501863 ']' 00:31:44.582 14:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.582 14:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:44.582 14:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.582 14:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:44.582 14:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.840 [2024-07-10 14:33:54.100524] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:31:44.840 [2024-07-10 14:33:54.100668] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.840 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.840 [2024-07-10 14:33:54.238934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:45.099 [2024-07-10 14:33:54.503186] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
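With the namespaces wired up, fio.sh launches nvmf_tgt inside cvl_0_0_ns_spdk (the `ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF` line above), waits for its RPC socket, and then provisions the target over JSON-RPC. A condensed sketch of that provisioning, assembled from the rpc.py calls that follow in this log (paths shortened for readability; run from the SPDK checkout root):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420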
00:31:45.099 [2024-07-10 14:33:54.503265] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:45.099 [2024-07-10 14:33:54.503293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:45.099 [2024-07-10 14:33:54.503315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:45.099 [2024-07-10 14:33:54.503337] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:45.099 [2024-07-10 14:33:54.503511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.099 [2024-07-10 14:33:54.503545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:45.099 [2024-07-10 14:33:54.503602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.099 [2024-07-10 14:33:54.503613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:45.665 14:33:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:45.665 14:33:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:31:45.665 14:33:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:45.922 [2024-07-10 14:33:55.320482] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.922 14:33:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:45.922 14:33:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:45.922 14:33:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.922 14:33:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:46.180 Malloc1 00:31:46.438 14:33:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:46.696 14:33:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:46.954 14:33:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.954 [2024-07-10 14:33:56.406125] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.954 14:33:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:47.213 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:31:47.214 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:47.214 14:33:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:47.471 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:47.471 fio-3.35 00:31:47.471 Starting 1 thread 00:31:47.729 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.249 00:31:50.249 test: (groupid=0, jobs=1): err= 0: pid=1502241: Wed Jul 10 14:33:59 2024 00:31:50.249 read: IOPS=6514, BW=25.4MiB/s (26.7MB/s)(51.1MiB/2008msec) 00:31:50.249 slat (usec): min=2, max=123, avg= 3.53, stdev= 1.88 00:31:50.249 clat (usec): min=3473, max=18776, avg=10797.59, stdev=888.18 00:31:50.249 lat (usec): min=3504, max=18780, avg=10801.12, stdev=888.10 00:31:50.249 clat percentiles (usec): 00:31:50.249 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:31:50.249 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:31:50.249 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:31:50.249 | 99.00th=[12780], 99.50th=[13173], 99.90th=[17695], 99.95th=[17957], 00:31:50.249 | 99.99th=[18744] 00:31:50.249 bw ( KiB/s): min=24792, max=26824, per=99.86%, avg=26022.00, stdev=876.90, samples=4 00:31:50.249 iops : min= 6198, max= 6706, avg=6505.50, stdev=219.23, samples=4 00:31:50.249 write: IOPS=6523, BW=25.5MiB/s (26.7MB/s)(51.2MiB/2008msec); 0 zone resets 00:31:50.249 slat (usec): min=3, max=120, avg= 3.74, stdev= 1.64 00:31:50.249 clat (usec): min=1630, max=16758, avg=8720.77, stdev=757.51 00:31:50.249 lat (usec): min=1644, max=16763, avg=8724.52, stdev=757.47 00:31:50.249 clat percentiles (usec): 00:31:50.249 | 1.00th=[ 7046], 5.00th=[ 7635], 
10.00th=[ 7898], 20.00th=[ 8160], 00:31:50.249 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8848], 00:31:50.249 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9503], 95.00th=[ 9765], 00:31:50.249 | 99.00th=[10290], 99.50th=[10683], 99.90th=[13698], 99.95th=[16188], 00:31:50.249 | 99.99th=[16712] 00:31:50.249 bw ( KiB/s): min=25792, max=26368, per=99.96%, avg=26086.00, stdev=235.82, samples=4 00:31:50.249 iops : min= 6448, max= 6592, avg=6521.50, stdev=58.95, samples=4 00:31:50.249 lat (msec) : 2=0.01%, 4=0.08%, 10=56.75%, 20=43.16% 00:31:50.249 cpu : usr=63.23%, sys=32.09%, ctx=68, majf=0, minf=1537 00:31:50.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:50.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:50.249 issued rwts: total=13081,13100,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:50.249 00:31:50.249 Run status group 0 (all jobs): 00:31:50.249 READ: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=51.1MiB (53.6MB), run=2008-2008msec 00:31:50.249 WRITE: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=51.2MiB (53.7MB), run=2008-2008msec 00:31:50.249 ----------------------------------------------------- 00:31:50.249 Suppressions used: 00:31:50.249 count bytes template 00:31:50.249 1 57 /usr/src/fio/parse.c 00:31:50.249 1 8 libtcmalloc_minimal.so 00:31:50.249 ----------------------------------------------------- 00:31:50.249 00:31:50.249 14:33:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:50.249 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:50.249 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:50.249 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:50.249 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:50.249 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:50.249 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:50.249 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:50.249 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.249 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:50.249 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:50.249 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:50.250 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:50.250 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:31:50.250 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:31:50.250 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:50.250 14:33:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:50.250 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:50.250 fio-3.35 00:31:50.250 Starting 1 thread 00:31:50.506 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.029 00:31:53.029 test: (groupid=0, jobs=1): err= 0: pid=1502670: Wed Jul 10 14:34:02 2024 00:31:53.029 read: IOPS=6191, BW=96.7MiB/s (101MB/s)(195MiB/2018msec) 00:31:53.029 slat (usec): min=3, max=107, avg= 4.99, stdev= 2.01 00:31:53.029 clat (usec): min=4144, max=32717, avg=12101.75, stdev=2806.47 00:31:53.029 lat (usec): min=4149, max=32722, avg=12106.74, stdev=2806.47 00:31:53.029 clat percentiles (usec): 00:31:53.029 | 1.00th=[ 6325], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[ 9896], 00:31:53.029 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11731], 60.00th=[12518], 00:31:53.029 | 70.00th=[13173], 80.00th=[14091], 90.00th=[15926], 95.00th=[16909], 00:31:53.029 | 99.00th=[19792], 99.50th=[21627], 99.90th=[26084], 99.95th=[26870], 00:31:53.029 | 99.99th=[27395] 00:31:53.029 bw ( KiB/s): min=39552, max=56608, per=49.46%, avg=48992.00, stdev=7882.10, samples=4 00:31:53.029 iops : min= 2472, max= 3538, avg=3062.00, stdev=492.63, samples=4 00:31:53.029 write: IOPS=3501, BW=54.7MiB/s (57.4MB/s)(101MiB/1845msec); 0 zone resets 00:31:53.029 slat (usec): min=33, max=148, avg=36.85, stdev= 5.88 00:31:53.029 clat (usec): min=7962, max=34919, avg=15679.06, stdev=3266.93 00:31:53.029 lat (usec): min=8011, max=34954, avg=15715.91, stdev=3266.91 00:31:53.029 clat percentiles (usec): 00:31:53.029 | 1.00th=[10421], 5.00th=[11469], 10.00th=[12256], 20.00th=[13173], 00:31:53.029 | 30.00th=[13829], 40.00th=[14353], 50.00th=[15008], 60.00th=[15795], 00:31:53.029 | 70.00th=[16712], 80.00th=[17957], 90.00th=[19792], 95.00th=[21890], 00:31:53.029 | 99.00th=[26346], 99.50th=[27919], 99.90th=[33817], 99.95th=[34341], 00:31:53.029 | 99.99th=[34866] 00:31:53.029 bw ( KiB/s): min=40832, max=58976, per=91.15%, avg=51064.00, stdev=8205.38, samples=4 00:31:53.029 iops : min= 2552, max= 3686, avg=3191.50, stdev=512.84, samples=4 00:31:53.029 lat (msec) : 10=13.81%, 20=82.46%, 50=3.73% 00:31:53.029 cpu : usr=74.32%, sys=21.96%, ctx=34, majf=0, minf=2076 00:31:53.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:31:53.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:53.029 issued rwts: total=12494,6460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.029 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:53.029 00:31:53.029 Run status group 0 (all jobs): 00:31:53.029 READ: bw=96.7MiB/s (101MB/s), 96.7MiB/s-96.7MiB/s (101MB/s-101MB/s), io=195MiB (205MB), run=2018-2018msec 00:31:53.029 WRITE: bw=54.7MiB/s (57.4MB/s), 54.7MiB/s-54.7MiB/s (57.4MB/s-57.4MB/s), io=101MiB (106MB), run=1845-1845msec 00:31:53.029 ----------------------------------------------------- 
00:31:53.029 Suppressions used: 00:31:53.029 count bytes template 00:31:53.029 1 57 /usr/src/fio/parse.c 00:31:53.029 141 13536 /usr/src/fio/iolog.c 00:31:53.029 1 8 libtcmalloc_minimal.so 00:31:53.029 ----------------------------------------------------- 00:31:53.029 00:31:53.029 14:34:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:53.287 14:34:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:53.287 14:34:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:53.287 14:34:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:53.287 14:34:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:31:53.287 14:34:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:31:53.287 14:34:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:53.287 14:34:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:53.287 14:34:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:31:53.287 14:34:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:31:53.287 14:34:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:31:53.287 14:34:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:56.562 Nvme0n1 00:31:56.562 14:34:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=88eba966-b2ab-4140-8fb5-87aba131aba5 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 88eba966-b2ab-4140-8fb5-87aba131aba5 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=88eba966-b2ab-4140-8fb5-87aba131aba5 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:59.860 { 00:31:59.860 "uuid": "88eba966-b2ab-4140-8fb5-87aba131aba5", 00:31:59.860 "name": "lvs_0", 00:31:59.860 "base_bdev": "Nvme0n1", 00:31:59.860 "total_data_clusters": 930, 00:31:59.860 "free_clusters": 930, 00:31:59.860 "block_size": 512, 00:31:59.860 "cluster_size": 1073741824 00:31:59.860 } 00:31:59.860 ]' 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="88eba966-b2ab-4140-8fb5-87aba131aba5") .free_clusters' 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="88eba966-b2ab-4140-8fb5-87aba131aba5") .cluster_size' 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:31:59.860 952320 00:31:59.860 14:34:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:00.117 6f8da915-c136-4224-ba4b-702cb60415b0 00:32:00.117 14:34:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:00.375 14:34:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:00.633 14:34:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:00.891 14:34:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- 
# /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:01.148 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:01.148 fio-3.35 00:32:01.148 Starting 1 thread 00:32:01.148 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.687 00:32:03.687 test: (groupid=0, jobs=1): err= 0: pid=1503995: Wed Jul 10 14:34:12 2024 00:32:03.687 read: IOPS=3991, BW=15.6MiB/s (16.3MB/s)(31.4MiB/2012msec) 00:32:03.687 slat (usec): min=2, max=189, avg= 3.75, stdev= 3.26 00:32:03.687 clat (usec): min=1314, max=174250, avg=17556.76, stdev=13667.10 00:32:03.687 lat (usec): min=1319, max=174308, avg=17560.51, stdev=13667.70 00:32:03.687 clat percentiles (msec): 00:32:03.687 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], 00:32:03.687 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 17], 00:32:03.687 | 70.00th=[ 18], 80.00th=[ 18], 90.00th=[ 19], 95.00th=[ 20], 00:32:03.687 | 99.00th=[ 24], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 176], 00:32:03.687 | 99.99th=[ 176] 00:32:03.687 bw ( KiB/s): min=11480, max=17696, per=99.82%, avg=15938.00, stdev=2979.08, samples=4 00:32:03.687 iops : min= 2870, max= 4424, avg=3984.50, stdev=744.77, samples=4 00:32:03.687 write: IOPS=4015, BW=15.7MiB/s (16.4MB/s)(31.6MiB/2012msec); 0 zone resets 00:32:03.687 slat (usec): min=3, max=212, avg= 4.00, stdev= 2.96 00:32:03.687 clat (usec): min=512, max=170995, avg=14197.91, stdev=12866.10 00:32:03.687 lat (usec): min=517, max=171005, avg=14201.92, stdev=12866.83 00:32:03.687 clat percentiles (msec): 00:32:03.687 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:32:03.687 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:32:03.687 | 70.00th=[ 14], 80.00th=[ 15], 90.00th=[ 15], 95.00th=[ 16], 00:32:03.687 | 99.00th=[ 20], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:32:03.687 | 99.99th=[ 171] 00:32:03.687 bw ( KiB/s): min=12136, max=17536, per=99.79%, avg=16028.00, stdev=2598.94, samples=4 00:32:03.687 iops : min= 3034, max= 4384, avg=4007.00, stdev=649.74, samples=4 00:32:03.687 lat (usec) : 750=0.01%, 1000=0.02% 00:32:03.687 lat (msec) : 2=0.01%, 4=0.06%, 10=0.89%, 20=97.83%, 50=0.38% 00:32:03.687 lat (msec) : 250=0.79% 00:32:03.687 cpu : usr=59.03%, sys=37.44%, ctx=67, majf=0, minf=1533 00:32:03.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:03.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:03.687 issued rwts: total=8031,8079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.687 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:03.687 00:32:03.687 Run status group 0 (all jobs): 00:32:03.687 READ: bw=15.6MiB/s (16.3MB/s), 15.6MiB/s-15.6MiB/s (16.3MB/s-16.3MB/s), io=31.4MiB (32.9MB), run=2012-2012msec 00:32:03.687 WRITE: bw=15.7MiB/s (16.4MB/s), 15.7MiB/s-15.7MiB/s (16.4MB/s-16.4MB/s), io=31.6MiB (33.1MB), run=2012-2012msec 00:32:03.687 ----------------------------------------------------- 00:32:03.687 Suppressions used: 00:32:03.687 count bytes template 00:32:03.687 1 58 /usr/src/fio/parse.c 00:32:03.687 1 8 libtcmalloc_minimal.so 00:32:03.687 ----------------------------------------------------- 00:32:03.687 00:32:03.687 14:34:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:03.948 14:34:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:05.319 14:34:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=821d980f-2fef-41e6-865e-59f114d9ca02 00:32:05.319 14:34:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 821d980f-2fef-41e6-865e-59f114d9ca02 00:32:05.319 14:34:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=821d980f-2fef-41e6-865e-59f114d9ca02 00:32:05.319 14:34:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:05.319 14:34:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:05.319 14:34:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:05.319 14:34:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:05.319 14:34:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:05.319 { 00:32:05.319 "uuid": "88eba966-b2ab-4140-8fb5-87aba131aba5", 00:32:05.319 "name": "lvs_0", 00:32:05.319 "base_bdev": "Nvme0n1", 00:32:05.319 "total_data_clusters": 930, 00:32:05.319 "free_clusters": 0, 00:32:05.319 "block_size": 512, 00:32:05.319 "cluster_size": 1073741824 00:32:05.319 }, 00:32:05.319 { 00:32:05.320 "uuid": "821d980f-2fef-41e6-865e-59f114d9ca02", 00:32:05.320 "name": "lvs_n_0", 00:32:05.320 "base_bdev": "6f8da915-c136-4224-ba4b-702cb60415b0", 00:32:05.320 "total_data_clusters": 237847, 00:32:05.320 "free_clusters": 237847, 00:32:05.320 "block_size": 512, 00:32:05.320 "cluster_size": 4194304 00:32:05.320 } 00:32:05.320 ]' 00:32:05.320 14:34:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="821d980f-2fef-41e6-865e-59f114d9ca02") .free_clusters' 00:32:05.320 14:34:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:32:05.320 14:34:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="821d980f-2fef-41e6-865e-59f114d9ca02") .cluster_size' 00:32:05.320 14:34:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:05.320 14:34:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:32:05.320 14:34:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:32:05.320 951388 00:32:05.320 14:34:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:06.692 ea9e8a5b-0921-46b1-976a-7ef3cb4fe53c 00:32:06.692 14:34:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:06.692 14:34:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:06.949 14:34:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:07.207 14:34:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # 
fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:07.207 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:07.207 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:07.207 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:07.207 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:07.207 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:07.207 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:07.207 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:07.207 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:07.207 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:07.207 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:07.207 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:07.207 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:07.207 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:07.208 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:07.208 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:07.208 14:34:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:07.463 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:07.463 fio-3.35 00:32:07.463 Starting 1 thread 00:32:07.463 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.082 00:32:10.082 test: (groupid=0, jobs=1): err= 0: pid=1504814: Wed Jul 10 14:34:19 2024 00:32:10.082 read: IOPS=4333, BW=16.9MiB/s (17.8MB/s)(34.0MiB/2010msec) 00:32:10.082 slat (usec): min=2, max=250, avg= 3.63, stdev= 3.64 00:32:10.082 clat (usec): min=6120, max=26036, avg=16243.15, stdev=1410.50 00:32:10.082 lat (usec): min=6143, max=26039, avg=16246.78, stdev=1410.35 00:32:10.082 clat percentiles (usec): 00:32:10.082 | 1.00th=[12911], 5.00th=[14091], 10.00th=[14615], 20.00th=[15139], 00:32:10.082 | 30.00th=[15533], 40.00th=[15926], 50.00th=[16188], 60.00th=[16581], 00:32:10.082 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:32:10.082 | 99.00th=[19530], 99.50th=[19792], 99.90th=[22938], 99.95th=[23200], 00:32:10.082 | 99.99th=[26084] 00:32:10.082 bw ( KiB/s): min=16152, max=17816, per=99.68%, avg=17280.00, stdev=762.73, 
samples=4 00:32:10.082 iops : min= 4038, max= 4454, avg=4320.00, stdev=190.68, samples=4 00:32:10.082 write: IOPS=4331, BW=16.9MiB/s (17.7MB/s)(34.0MiB/2010msec); 0 zone resets 00:32:10.082 slat (usec): min=2, max=166, avg= 3.72, stdev= 2.27 00:32:10.082 clat (usec): min=2992, max=22717, avg=13015.52, stdev=1216.93 00:32:10.082 lat (usec): min=2999, max=22721, avg=13019.24, stdev=1216.90 00:32:10.082 clat percentiles (usec): 00:32:10.082 | 1.00th=[10159], 5.00th=[11207], 10.00th=[11600], 20.00th=[12125], 00:32:10.082 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:32:10.083 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14484], 95.00th=[14877], 00:32:10.083 | 99.00th=[15795], 99.50th=[16450], 99.90th=[18744], 99.95th=[20317], 00:32:10.083 | 99.99th=[22676] 00:32:10.083 bw ( KiB/s): min=17000, max=17584, per=99.85%, avg=17302.00, stdev=255.61, samples=4 00:32:10.083 iops : min= 4250, max= 4396, avg=4325.50, stdev=63.90, samples=4 00:32:10.083 lat (msec) : 4=0.02%, 10=0.45%, 20=99.36%, 50=0.17% 00:32:10.083 cpu : usr=63.66%, sys=33.25%, ctx=89, majf=0, minf=1534 00:32:10.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:10.083 issued rwts: total=8711,8707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:10.083 00:32:10.083 Run status group 0 (all jobs): 00:32:10.083 READ: bw=16.9MiB/s (17.8MB/s), 16.9MiB/s-16.9MiB/s (17.8MB/s-17.8MB/s), io=34.0MiB (35.7MB), run=2010-2010msec 00:32:10.083 WRITE: bw=16.9MiB/s (17.7MB/s), 16.9MiB/s-16.9MiB/s (17.7MB/s-17.7MB/s), io=34.0MiB (35.7MB), run=2010-2010msec 00:32:10.083 ----------------------------------------------------- 00:32:10.083 Suppressions used: 00:32:10.083 count bytes template 00:32:10.083 1 58 /usr/src/fio/parse.c 00:32:10.083 1 8 libtcmalloc_minimal.so 00:32:10.083 ----------------------------------------------------- 00:32:10.083 00:32:10.083 14:34:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:10.340 14:34:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:10.340 14:34:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:15.594 14:34:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:15.594 14:34:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:18.117 14:34:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:18.117 14:34:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:20.028 14:34:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:20.028 14:34:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:20.028 14:34:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:20.028 14:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:32:20.028 14:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:32:20.028 14:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:20.028 14:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:32:20.028 14:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:20.028 14:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:20.028 rmmod nvme_tcp 00:32:20.028 rmmod nvme_fabrics 00:32:20.028 rmmod nvme_keyring 00:32:20.028 14:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:20.285 14:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:32:20.285 14:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:32:20.285 14:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1501863 ']' 00:32:20.285 14:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1501863 00:32:20.285 14:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1501863 ']' 00:32:20.285 14:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1501863 00:32:20.285 14:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:32:20.285 14:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:20.285 14:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1501863 00:32:20.285 14:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:20.285 14:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:20.285 14:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1501863' 00:32:20.285 killing process with pid 1501863 00:32:20.285 14:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1501863 00:32:20.285 14:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1501863 00:32:21.655 14:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:21.655 14:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:21.655 14:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:21.655 14:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:21.655 14:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:21.655 14:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.655 14:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.655 14:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.184 14:34:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:24.184 00:32:24.184 real 0m41.089s 00:32:24.184 user 2m36.130s 00:32:24.184 sys 0m7.870s 00:32:24.184 14:34:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:24.184 14:34:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.184 ************************************ 00:32:24.184 END TEST nvmf_fio_host 00:32:24.184 ************************************ 00:32:24.184 14:34:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:24.184 14:34:33 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:24.184 14:34:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:24.184 14:34:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:24.184 14:34:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.184 ************************************ 00:32:24.184 START TEST nvmf_failover 00:32:24.184 ************************************ 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:24.184 * Looking for test storage... 00:32:24.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:24.184 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:32:24.185 14:34:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:25.558 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:25.558 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.558 14:34:34 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:25.558 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:25.558 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:25.558 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:25.559 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:25.559 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:25.559 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:25.559 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:25.559 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:25.559 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:25.559 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:25.559 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:25.559 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:25.559 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:25.559 14:34:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:25.559 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:25.559 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:25.559 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:25.559 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:25.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:25.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:32:25.815 00:32:25.815 --- 10.0.0.2 ping statistics --- 00:32:25.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.815 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:25.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:25.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:32:25.815 00:32:25.815 --- 10.0.0.1 ping statistics --- 00:32:25.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.815 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:25.815 14:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:25.816 14:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:25.816 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1508319 00:32:25.816 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:25.816 14:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1508319 00:32:25.816 14:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1508319 ']' 00:32:25.816 14:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.816 14:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:25.816 14:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:25.816 14:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:25.816 14:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:25.816 [2024-07-10 14:34:35.199009] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:32:25.816 [2024-07-10 14:34:35.199142] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:25.816 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.073 [2024-07-10 14:34:35.330177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:26.073 [2024-07-10 14:34:35.548045] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:26.073 [2024-07-10 14:34:35.548115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:26.073 [2024-07-10 14:34:35.548141] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:26.073 [2024-07-10 14:34:35.548158] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:26.073 [2024-07-10 14:34:35.548175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:26.073 [2024-07-10 14:34:35.548310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:26.073 [2024-07-10 14:34:35.548349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.073 [2024-07-10 14:34:35.548359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:26.637 14:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:26.637 14:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:32:26.637 14:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:26.637 14:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:26.637 14:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:26.894 14:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:26.894 14:34:36 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:27.151 [2024-07-10 14:34:36.407332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.151 14:34:36 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:27.409 Malloc0 00:32:27.409 14:34:36 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:27.666 14:34:37 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:27.923 14:34:37 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:28.181 [2024-07-10 14:34:37.468973] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.181 14:34:37 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:28.437 [2024-07-10 14:34:37.713656] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:28.437 14:34:37 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:28.694 [2024-07-10 14:34:37.962491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:28.694 14:34:37 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1508725 00:32:28.694 14:34:37 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:28.694 14:34:37 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:28.694 14:34:37 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1508725 /var/tmp/bdevperf.sock 00:32:28.694 14:34:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1508725 ']' 00:32:28.694 14:34:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:28.694 14:34:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:28.694 14:34:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:28.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:28.694 14:34:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:28.694 14:34:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:29.627 14:34:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:29.627 14:34:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:32:29.627 14:34:38 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:29.884 NVMe0n1 00:32:29.884 14:34:39 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:30.449 00:32:30.449 14:34:39 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1508875 00:32:30.449 14:34:39 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:30.449 14:34:39 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:31.382 14:34:40 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:31.639 14:34:40 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:34.915 14:34:43 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:34.915 00:32:34.915 14:34:44 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:35.172 14:34:44 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:38.448 14:34:47 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:38.448 [2024-07-10 14:34:47.752539] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.448 14:34:47 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:39.380 14:34:48 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:39.638 [2024-07-10 14:34:49.033526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 
[2024-07-10 14:34:49.033669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033801] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.033985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 
[2024-07-10 14:34:49.034109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 [2024-07-10 14:34:49.034410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:39.638 14:34:49 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1508875 00:32:46.187 0 00:32:46.187 14:34:54 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1508725 00:32:46.187 14:34:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1508725 ']' 00:32:46.187 14:34:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1508725 00:32:46.187 14:34:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:32:46.187 14:34:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:46.187 14:34:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 1508725 00:32:46.187 14:34:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:46.187 14:34:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:46.187 14:34:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1508725' 00:32:46.187 killing process with pid 1508725 00:32:46.187 14:34:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1508725 00:32:46.187 14:34:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1508725 00:32:46.455 14:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:46.455 [2024-07-10 14:34:38.060009] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:32:46.455 [2024-07-10 14:34:38.060178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508725 ] 00:32:46.455 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.455 [2024-07-10 14:34:38.188647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.455 [2024-07-10 14:34:38.422703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.455 Running I/O for 15 seconds... 00:32:46.455 [2024-07-10 14:34:40.867191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.867268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.867331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:54400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.867356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.867381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.867402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.867451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:54416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.867489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.867515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:54424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.867537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.867560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.867582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.867605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.867627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.867649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.867670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.867693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:54456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.867713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.867751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.867772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.867809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:54472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.867831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.867865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:54480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.867886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.867908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.867928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.867949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.867969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.867991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.868039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.455 [2024-07-10 14:34:40.868064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.455 [2024-07-10 14:34:40.868085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 
[2024-07-10 14:34:40.868107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:54520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:54560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868577] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:54600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.456 [2024-07-10 14:34:40.868834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.868878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.868920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.868963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.868986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:109 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.869974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54856 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.869995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.870017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.870037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.456 [2024-07-10 14:34:40.870059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.456 [2024-07-10 14:34:40.870080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 
14:34:40.870449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.870953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.870975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.871007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.871031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.457 [2024-07-10 14:34:40.871052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.871102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.457 [2024-07-10 14:34:40.871129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55048 len:8 PRP1 0x0 PRP2 0x0 00:32:46.457 [2024-07-10 14:34:40.871150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.871176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.457 [2024-07-10 14:34:40.871196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.457 [2024-07-10 14:34:40.871222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55056 len:8 PRP1 0x0 PRP2 0x0 00:32:46.457 [2024-07-10 14:34:40.871241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.871261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.457 [2024-07-10 14:34:40.871277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.457 [2024-07-10 14:34:40.871294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55064 len:8 PRP1 0x0 PRP2 0x0 00:32:46.457 [2024-07-10 14:34:40.871312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.871330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.457 [2024-07-10 14:34:40.871345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.457 [2024-07-10 14:34:40.871362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55072 len:8 PRP1 0x0 PRP2 0x0 00:32:46.457 [2024-07-10 14:34:40.871380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.871398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.457 [2024-07-10 14:34:40.871417] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.457 [2024-07-10 14:34:40.871460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55080 len:8 PRP1 0x0 PRP2 0x0 00:32:46.457 [2024-07-10 14:34:40.871480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.871500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.457 [2024-07-10 14:34:40.871518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.457 [2024-07-10 14:34:40.871535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55088 len:8 PRP1 0x0 PRP2 0x0 00:32:46.457 [2024-07-10 14:34:40.871554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.871572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.457 [2024-07-10 14:34:40.871589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.457 [2024-07-10 14:34:40.871606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55096 len:8 PRP1 0x0 PRP2 0x0 00:32:46.457 [2024-07-10 14:34:40.871625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.871643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.457 [2024-07-10 14:34:40.871659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.457 [2024-07-10 14:34:40.871677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55104 len:8 PRP1 0x0 PRP2 0x0 00:32:46.457 [2024-07-10 14:34:40.871695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.871719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.457 [2024-07-10 14:34:40.871750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.457 [2024-07-10 14:34:40.871767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55112 len:8 PRP1 0x0 PRP2 0x0 00:32:46.457 [2024-07-10 14:34:40.871785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.871805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.457 [2024-07-10 14:34:40.871821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.457 [2024-07-10 14:34:40.871837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55120 len:8 PRP1 0x0 PRP2 0x0 00:32:46.457 [2024-07-10 14:34:40.871855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.871873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.457 [2024-07-10 14:34:40.871889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:46.457 [2024-07-10 14:34:40.871906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55128 len:8 PRP1 0x0 PRP2 0x0 00:32:46.457 [2024-07-10 14:34:40.871924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.871942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.457 [2024-07-10 14:34:40.871958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.457 [2024-07-10 14:34:40.871975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55136 len:8 PRP1 0x0 PRP2 0x0 00:32:46.457 [2024-07-10 14:34:40.871993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.872014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.457 [2024-07-10 14:34:40.872031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.457 [2024-07-10 14:34:40.872047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55144 len:8 PRP1 0x0 PRP2 0x0 00:32:46.457 [2024-07-10 14:34:40.872065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.457 [2024-07-10 14:34:40.872083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.872099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.872115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55152 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.872133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.872151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.872167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.872183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55160 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.872201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.872219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.872235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.872251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55168 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.872270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.872289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.872305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 
14:34:40.872322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55176 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.872339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.872358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.872373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.872389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55184 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.872429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.872452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.872469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.872486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55192 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.872505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.872524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.872540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.872557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55200 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.872580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.872599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.872615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.872632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55208 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.872652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.872671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.872687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.872704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55216 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.872723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.872758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.872774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.872791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55224 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.872809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.872827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.872842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.872860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55232 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.872878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.872896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.872911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.872928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55240 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.872946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.872964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.872979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.872995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55248 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.873013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.873032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.873047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.873064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55256 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.873082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.873100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.873115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.873136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55264 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.873155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.873173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.873188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.873204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:55272 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.873222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.873240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.873256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.873273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55280 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.873290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.873308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.873324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.873340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55288 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.873358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.873376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.873392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.873421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55296 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.873464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.873485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.873502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.873520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55304 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.873540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.873560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.873577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.873594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55312 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.873612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.873631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.873647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.873665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55320 len:8 PRP1 0x0 PRP2 0x0 
00:32:46.458 [2024-07-10 14:34:40.873683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.873706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.873723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.873755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55328 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.873775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.873794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.873810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.873827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55336 len:8 PRP1 0x0 PRP2 0x0 00:32:46.458 [2024-07-10 14:34:40.873845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.458 [2024-07-10 14:34:40.873863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.458 [2024-07-10 14:34:40.873879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.458 [2024-07-10 14:34:40.873895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55344 len:8 PRP1 0x0 PRP2 0x0 00:32:46.459 [2024-07-10 14:34:40.873914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.873932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.459 [2024-07-10 14:34:40.873947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.459 [2024-07-10 14:34:40.873964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55352 len:8 PRP1 0x0 PRP2 0x0 00:32:46.459 [2024-07-10 14:34:40.873982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.874000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.459 [2024-07-10 14:34:40.874016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.459 [2024-07-10 14:34:40.874032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55360 len:8 PRP1 0x0 PRP2 0x0 00:32:46.459 [2024-07-10 14:34:40.874050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.874069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.459 [2024-07-10 14:34:40.874085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.459 [2024-07-10 14:34:40.874102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55368 len:8 PRP1 0x0 PRP2 0x0 00:32:46.459 [2024-07-10 14:34:40.874120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.874137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.459 [2024-07-10 14:34:40.874153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.459 [2024-07-10 14:34:40.874169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55376 len:8 PRP1 0x0 PRP2 0x0 00:32:46.459 [2024-07-10 14:34:40.874188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.874206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.459 [2024-07-10 14:34:40.874222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.459 [2024-07-10 14:34:40.874239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55384 len:8 PRP1 0x0 PRP2 0x0 00:32:46.459 [2024-07-10 14:34:40.874262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.874281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.459 [2024-07-10 14:34:40.874297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.459 [2024-07-10 14:34:40.874314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55392 len:8 PRP1 0x0 PRP2 0x0 00:32:46.459 [2024-07-10 14:34:40.874332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.874350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.459 [2024-07-10 14:34:40.874366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.459 [2024-07-10 14:34:40.874382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55400 len:8 PRP1 0x0 PRP2 0x0 00:32:46.459 [2024-07-10 14:34:40.874415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.874443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.459 [2024-07-10 14:34:40.874461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.459 [2024-07-10 14:34:40.874479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55408 len:8 PRP1 0x0 PRP2 0x0 00:32:46.459 [2024-07-10 14:34:40.874498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.874517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.459 [2024-07-10 14:34:40.874533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.459 [2024-07-10 14:34:40.874551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54648 len:8 PRP1 0x0 PRP2 0x0 00:32:46.459 [2024-07-10 14:34:40.874569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.874589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.459 [2024-07-10 14:34:40.874605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.459 [2024-07-10 14:34:40.874622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54656 len:8 PRP1 0x0 PRP2 0x0 00:32:46.459 [2024-07-10 14:34:40.874641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.874936] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller. 00:32:46.459 [2024-07-10 14:34:40.874966] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:46.459 [2024-07-10 14:34:40.875032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.459 [2024-07-10 14:34:40.875059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.875083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.459 [2024-07-10 14:34:40.875104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.875125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.459 [2024-07-10 14:34:40.875144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.875169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.459 [2024-07-10 14:34:40.875189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:40.875208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.459 [2024-07-10 14:34:40.875300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:46.459 [2024-07-10 14:34:40.879107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.459 [2024-07-10 14:34:40.963748] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
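Editor's note: the long run of "ABORTED - SQ DELETION" notices above is the expected signature of this test, not a failure by itself; when the 10.0.0.2:4420 path goes away, bdev_nvme aborts the I/O still queued on that qpair, fails over to 10.0.0.2:4421 (bdev_nvme_failover_trid) and resets the controller (_bdev_nvme_reset_ctrlr_complete). A minimal sketch of how a two-listener TCP failover like this could be reproduced with SPDK's rpc.py is below; it assumes a running nvmf target and an SPDK initiator (e.g. bdevperf) with its own RPC socket, and the bdev/controller names and command ordering are illustrative, not taken from this job's scripts.

  # target side (assumed setup, not the exact commands run by this job)
  ./scripts/rpc.py nvmf_create_transport -t TCP
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # initiator side: attach the same subsystem through both portals under one controller name
  ./scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # drop the first listener to force the failover seen in the log above
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420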
00:32:46.459 [2024-07-10 14:34:44.506257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.459 [2024-07-10 14:34:44.506363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:44.506391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.459 [2024-07-10 14:34:44.506412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:44.506441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.459 [2024-07-10 14:34:44.506464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:44.506485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.459 [2024-07-10 14:34:44.506506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:44.506526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:46.459 [2024-07-10 14:34:44.507596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.459 [2024-07-10 14:34:44.507630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:44.507671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.459 [2024-07-10 14:34:44.507694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:44.507719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.459 [2024-07-10 14:34:44.507741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:44.507779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.459 [2024-07-10 14:34:44.507801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:44.507824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.459 [2024-07-10 14:34:44.507860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.459 [2024-07-10 14:34:44.507883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.459 [2024-07-10 14:34:44.507913] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.460 [2024-07-10 14:34:44.507937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.460 [2024-07-10 14:34:44.507957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.460 [2024-07-10 14:34:44.507979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.460 [2024-07-10 14:34:44.507999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.460 [2024-07-10 14:34:44.508021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.460 [2024-07-10 14:34:44.508042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.460 [2024-07-10 14:34:44.508064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.460 [2024-07-10 14:34:44.508084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.460 [2024-07-10 14:34:44.508106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.460 [2024-07-10 14:34:44.508125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.460 [2024-07-10 14:34:44.508146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.460 [2024-07-10 14:34:44.508167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.460 [2024-07-10 14:34:44.508188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.460 [2024-07-10 14:34:44.508208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.460 [2024-07-10 14:34:44.508229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.460 [2024-07-10 14:34:44.508249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.460 [2024-07-10 14:34:44.508271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.460 [2024-07-10 14:34:44.508291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.460 [2024-07-10 14:34:44.508312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.460 [2024-07-10 14:34:44.508332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:46.460 .. 00:32:46.462 [2024-07-10 14:34:44.508353 .. 14:34:44.513414] nvme_qpair.c: *NOTICE*: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs for sqid:1 (READ lba:120896..121752, WRITE lba:121816..121832, len:8), each command completed as ABORTED - SQ DELETION (00/08) qid:1 while the submission queue was deleted for the controller reset
00:32:46.462 [2024-07-10 14:34:44.513459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3180 is same with the state(5) to be set
00:32:46.462 [2024-07-10 14:34:44.513493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:46.462 [2024-07-10 14:34:44.513512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:46.462 [2024-07-10 14:34:44.513532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121760 len:8 PRP1 0x0 PRP2 0x0
00:32:46.462 [2024-07-10 14:34:44.513551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:46.462 [2024-07-10 14:34:44.513842] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3180 was disconnected and freed. reset controller.
00:32:46.462 [2024-07-10 14:34:44.513871] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:32:46.462 [2024-07-10 14:34:44.513892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:46.462 [2024-07-10 14:34:44.517811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:46.462 [2024-07-10 14:34:44.517878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor
00:32:46.462 [2024-07-10 14:34:44.643086] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:32:46.462 .. 00:32:46.465 [2024-07-10 14:34:49.036243 .. 14:34:49.040614] nvme_qpair.c: *NOTICE*: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs for sqid:1 (WRITE lba:112824..113472, READ lba:112592..112648, len:8), each command completed as ABORTED - SQ DELETION (00/08) qid:1 while the submission queue was deleted
00:32:46.465 [2024-07-10 14:34:49.040636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:46.465 [2024-07-10 14:34:49.040657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.040681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.465 [2024-07-10 14:34:49.040701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.040750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.465 [2024-07-10 14:34:49.040772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.040794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.465 [2024-07-10 14:34:49.040815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.040836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.465 [2024-07-10 14:34:49.040857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.040879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.465 [2024-07-10 14:34:49.040900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.040932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.465 [2024-07-10 14:34:49.040952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.040974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.465 [2024-07-10 14:34:49.040994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.041016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.465 [2024-07-10 14:34:49.041036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.041059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.465 [2024-07-10 14:34:49.041079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.041123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.465 [2024-07-10 14:34:49.041152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113560 len:8 PRP1 0x0 PRP2 0x0 00:32:46.465 [2024-07-10 14:34:49.041174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.041277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.465 [2024-07-10 14:34:49.041307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.041331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.465 [2024-07-10 14:34:49.041351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.041371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.465 [2024-07-10 14:34:49.041391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.041411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.465 [2024-07-10 14:34:49.041448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.041469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:46.465 [2024-07-10 14:34:49.041750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.465 [2024-07-10 14:34:49.041776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.465 [2024-07-10 14:34:49.041794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113568 len:8 PRP1 0x0 PRP2 0x0 00:32:46.465 [2024-07-10 14:34:49.041814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.041839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.465 [2024-07-10 14:34:49.041856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.465 [2024-07-10 14:34:49.041874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113576 len:8 PRP1 0x0 PRP2 0x0 00:32:46.465 [2024-07-10 14:34:49.041893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.041911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.465 [2024-07-10 14:34:49.041927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.465 [2024-07-10 14:34:49.041944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113584 len:8 PRP1 0x0 PRP2 0x0 00:32:46.465 [2024-07-10 14:34:49.041963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.041981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:32:46.465 [2024-07-10 14:34:49.041997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.465 [2024-07-10 14:34:49.042013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113592 len:8 PRP1 0x0 PRP2 0x0 00:32:46.465 [2024-07-10 14:34:49.042031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.042050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.465 [2024-07-10 14:34:49.042066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.465 [2024-07-10 14:34:49.042087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113600 len:8 PRP1 0x0 PRP2 0x0 00:32:46.465 [2024-07-10 14:34:49.042105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.042124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.465 [2024-07-10 14:34:49.042140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.465 [2024-07-10 14:34:49.042157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113608 len:8 PRP1 0x0 PRP2 0x0 00:32:46.465 [2024-07-10 14:34:49.042175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.042194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.465 [2024-07-10 14:34:49.042209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.465 [2024-07-10 14:34:49.042226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112656 len:8 PRP1 0x0 PRP2 0x0 00:32:46.465 [2024-07-10 14:34:49.042245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.042263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.465 [2024-07-10 14:34:49.042279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.465 [2024-07-10 14:34:49.042295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112664 len:8 PRP1 0x0 PRP2 0x0 00:32:46.465 [2024-07-10 14:34:49.042313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.042331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.465 [2024-07-10 14:34:49.042347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.465 [2024-07-10 14:34:49.042364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112672 len:8 PRP1 0x0 PRP2 0x0 00:32:46.465 [2024-07-10 14:34:49.042381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.042400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.465 [2024-07-10 14:34:49.042453] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.465 [2024-07-10 14:34:49.042472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112680 len:8 PRP1 0x0 PRP2 0x0 00:32:46.465 [2024-07-10 14:34:49.042491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.042510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.465 [2024-07-10 14:34:49.042527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.465 [2024-07-10 14:34:49.042544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112688 len:8 PRP1 0x0 PRP2 0x0 00:32:46.465 [2024-07-10 14:34:49.042563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.465 [2024-07-10 14:34:49.042582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.465 [2024-07-10 14:34:49.042598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.465 [2024-07-10 14:34:49.042616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112696 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.042634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.042653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.042674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.042692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112704 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.042711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.042738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.042769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.042798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112712 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.042817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.042836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.042862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.042879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112720 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.042897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.042915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.042931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.042947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112728 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.042966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.042985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.043001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.043018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112736 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.043036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.043098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.043116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.043133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112744 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.043152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.043170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.043187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.043204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112752 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.043222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.043241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.043263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.043280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112760 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.043298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.043331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.043348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.043365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112768 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.043384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.043402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.043451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 
[2024-07-10 14:34:49.043470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112776 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.043490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.043509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.043526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.043543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112784 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.043562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.043581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.043597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.043615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112792 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.043634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.043653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.043669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.043686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112800 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.043704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.043724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.043766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.043783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112808 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.043801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.043819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.043836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.043852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112816 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.043871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.043888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.043914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.043931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112592 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.043953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.043972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.043988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.044004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112824 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.044025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.044044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.044061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.044077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112832 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.044106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.044125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.044141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.044168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112840 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.044186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.044205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.044233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.044250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112848 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.044268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.044286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.044302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.044319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112856 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.044337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.044356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.044372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.044390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:112864 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.044440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.044463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.044480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.044498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112872 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.044517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.044537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.044557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.044575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112880 len:8 PRP1 0x0 PRP2 0x0 00:32:46.466 [2024-07-10 14:34:49.044595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.466 [2024-07-10 14:34:49.044614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.466 [2024-07-10 14:34:49.044631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.466 [2024-07-10 14:34:49.044648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112888 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.044668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.044687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.044718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.044745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112896 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.044763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.044782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.044799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.044816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112904 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.044834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.044853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.044869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.044886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112912 len:8 PRP1 0x0 PRP2 
0x0 00:32:46.467 [2024-07-10 14:34:49.044904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.044922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.044938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.044955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112920 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.044973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.044992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.045009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.045026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112928 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.045055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.045074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.045090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.045107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112936 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.045130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.045152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.045169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.045187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112944 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.045205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.045224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.045240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.045257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112952 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.045276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.045294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.045310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.045327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112960 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.045346] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.045364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.045380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.045397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112968 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.045453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.045476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.045494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.045512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112976 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.045530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.045549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.045566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.045584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112984 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.045603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.045636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.045653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.045671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112992 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.045691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.045710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.045726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.045769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113000 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.045791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.045822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.045838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.045854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113008 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.045884] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.045902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.045919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.045936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113016 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.045955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.045973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.045989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.046006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113024 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.046024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.046042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.046058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.046075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113032 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.046092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.046111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.046127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.046143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113040 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.046161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.046179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.046195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.046212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113048 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.046230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.046249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.046265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.046282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113056 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.046301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.046320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.046335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.046355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113064 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.046374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.046393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.046440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.046460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113072 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.046480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.046499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.046516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.046533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113080 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.046552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.467 [2024-07-10 14:34:49.046571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.467 [2024-07-10 14:34:49.046587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.467 [2024-07-10 14:34:49.046605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113088 len:8 PRP1 0x0 PRP2 0x0 00:32:46.467 [2024-07-10 14:34:49.046623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.046642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.046659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.046676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113096 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.046695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.046714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.046741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.046773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113104 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.046791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 
[2024-07-10 14:34:49.046822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.046838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.046855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113112 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.046873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.046899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.046915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.046932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113120 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.046950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.046980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.046997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.047014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113128 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.047032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.047051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.047077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.047094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113136 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.047112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.047130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.047146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.047163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113144 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.047181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.047199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.047216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.047234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113152 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.047252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.047282] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.047298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.047314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113160 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.047333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.047359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.047375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.047392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113168 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.047439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.047461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.047478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.047496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113176 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.047515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.047534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.047551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.047568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113184 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.047591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.047610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.047626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.047644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113192 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.047663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.047682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.047698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.047743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113200 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.047761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.047780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.047796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.047812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113208 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.047830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.047858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.047874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.047890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113216 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.047908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.047926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.047942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.047959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113224 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.047977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.047995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.048011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.048027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113232 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.048046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.048064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.048080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.048097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113240 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.048115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.048152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.048169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.048190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113248 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.048209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.048228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 
14:34:49.048243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.468 [2024-07-10 14:34:49.048260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113256 len:8 PRP1 0x0 PRP2 0x0 00:32:46.468 [2024-07-10 14:34:49.048278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.468 [2024-07-10 14:34:49.048296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.468 [2024-07-10 14:34:49.048311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.048327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113264 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.048345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.048364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.048379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.048401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113272 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.048452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.048474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.048490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.048508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113280 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.048526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.048544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.048560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.048578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113288 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.048596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.048615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.048631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.048648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113296 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.048666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.048685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.048701] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.048718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113304 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.048740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.048780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.048812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.048829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113312 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.048847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.048866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.048881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.048898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113320 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.048915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.048933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.048949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.048965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113328 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.048983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.049001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.049022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.049043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113336 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.049062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.049090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.049106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.049122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113344 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.049140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.049158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.049174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.049190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113352 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.049208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.049226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.049241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.049258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113360 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.049276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.049294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.049309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.049325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113368 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.049343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.049370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.049387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.049438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113376 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.049461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.049481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.049498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.049515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113384 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.049534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.049552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.049568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.049585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113392 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.049610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.049629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.049646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 
[2024-07-10 14:34:49.049668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113400 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.049687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.049706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.049722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.049762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113408 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.049780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.049799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.049815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.049831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113416 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.049849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.049867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.049882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.049899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113424 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.049917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.049935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.049950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.049966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113432 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.049988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.050009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.050025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.050042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112600 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.050060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.050078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.050095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.050111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112608 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.050130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.050147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.050163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.050181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112616 len:8 PRP1 0x0 PRP2 0x0 00:32:46.469 [2024-07-10 14:34:49.050199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.469 [2024-07-10 14:34:49.050216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.469 [2024-07-10 14:34:49.050232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.469 [2024-07-10 14:34:49.050250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112624 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.050270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.050289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.050314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.050331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112632 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.050349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.050367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.050383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.050400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112640 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.050453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.050474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.050492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.050510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112648 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.050530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.050549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.050569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.050587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:113440 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.050606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.050639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.050656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.050674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113448 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.050694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.050713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.050730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.050763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113456 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.050781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.050800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.050817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.050838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113464 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.050856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.050875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.050891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.050908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113472 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.050926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.050945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.050960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.050977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113480 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.051000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.051020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.051036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.051052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113488 len:8 PRP1 0x0 PRP2 0x0 
00:32:46.470 [2024-07-10 14:34:49.051071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.051089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.051105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.051121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113496 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.051140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.051161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.051178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.051196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113504 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.051214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.051233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.051249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.051270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113512 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.051289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.051307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.051323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.051340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113520 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.051358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.051376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.051391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.051439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113528 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.051460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.051481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.051497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.051514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113536 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.051533] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.051551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.051568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.051586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113544 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.051604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.051622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.051638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.051656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113552 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.051674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.051693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.470 [2024-07-10 14:34:49.051709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.470 [2024-07-10 14:34:49.051733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113560 len:8 PRP1 0x0 PRP2 0x0 00:32:46.470 [2024-07-10 14:34:49.051771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.470 [2024-07-10 14:34:49.052039] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3900 was disconnected and freed. reset controller. 00:32:46.470 [2024-07-10 14:34:49.052066] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:46.470 [2024-07-10 14:34:49.052088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.470 [2024-07-10 14:34:49.052155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:46.470 [2024-07-10 14:34:49.056127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.470 [2024-07-10 14:34:49.141860] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:46.470 00:32:46.470 Latency(us) 00:32:46.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.470 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:46.470 Verification LBA range: start 0x0 length 0x4000 00:32:46.470 NVMe0n1 : 15.05 5947.13 23.23 623.86 0.00 19393.31 1128.68 46797.56 00:32:46.470 =================================================================================================================== 00:32:46.470 Total : 5947.13 23.23 623.86 0.00 19393.31 1128.68 46797.56 00:32:46.470 Received shutdown signal, test time was about 15.000000 seconds 00:32:46.470 00:32:46.470 Latency(us) 00:32:46.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.470 =================================================================================================================== 00:32:46.470 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:46.470 14:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:46.470 14:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:46.470 14:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:46.470 14:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1510847 00:32:46.470 14:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1510847 /var/tmp/bdevperf.sock 00:32:46.470 14:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:46.470 14:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1510847 ']' 00:32:46.470 14:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:46.470 14:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:46.471 14:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:46.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:46.471 14:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:46.471 14:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:47.459 14:34:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:47.459 14:34:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:32:47.459 14:34:56 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:47.716 [2024-07-10 14:34:57.083397] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:47.716 14:34:57 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:47.974 [2024-07-10 14:34:57.376312] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:47.974 14:34:57 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:48.539 NVMe0n1 00:32:48.539 14:34:57 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:48.796 00:32:48.796 14:34:58 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:49.361 00:32:49.361 14:34:58 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:49.361 14:34:58 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:49.619 14:34:58 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:49.877 14:34:59 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:53.161 14:35:02 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:53.161 14:35:02 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:53.161 14:35:02 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1511538 00:32:53.161 14:35:02 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:53.161 14:35:02 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1511538 00:32:54.092 0 00:32:54.092 14:35:03 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:54.092 [2024-07-10 14:34:55.946082] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:32:54.092 [2024-07-10 14:34:55.946242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510847 ] 00:32:54.092 EAL: No free 2048 kB hugepages reported on node 1 00:32:54.092 [2024-07-10 14:34:56.076524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.092 [2024-07-10 14:34:56.308570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.092 [2024-07-10 14:34:59.119990] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:54.092 [2024-07-10 14:34:59.120119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.092 [2024-07-10 14:34:59.120151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.092 [2024-07-10 14:34:59.120194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.092 [2024-07-10 14:34:59.120216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.092 [2024-07-10 14:34:59.120237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.092 [2024-07-10 14:34:59.120259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.092 [2024-07-10 14:34:59.120280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.092 [2024-07-10 14:34:59.120301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.092 [2024-07-10 14:34:59.120320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.092 [2024-07-10 14:34:59.120404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.092 [2024-07-10 14:34:59.120471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:54.092 [2024-07-10 14:34:59.134264] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:54.092 Running I/O for 1 seconds... 
00:32:54.092 00:32:54.092 Latency(us) 00:32:54.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.092 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:54.092 Verification LBA range: start 0x0 length 0x4000 00:32:54.092 NVMe0n1 : 1.02 6337.72 24.76 0.00 0.00 20111.32 3713.71 17864.63 00:32:54.092 =================================================================================================================== 00:32:54.092 Total : 6337.72 24.76 0.00 0.00 20111.32 3713.71 17864.63 00:32:54.092 14:35:03 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:54.092 14:35:03 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:54.349 14:35:03 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:54.606 14:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:54.607 14:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:54.864 14:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:55.122 14:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:58.400 14:35:07 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:58.400 14:35:07 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:58.400 14:35:07 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1510847 00:32:58.400 14:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1510847 ']' 00:32:58.400 14:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1510847 00:32:58.400 14:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:32:58.400 14:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:58.400 14:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1510847 00:32:58.658 14:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:58.658 14:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:58.658 14:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1510847' 00:32:58.658 killing process with pid 1510847 00:32:58.658 14:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1510847 00:32:58.658 14:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1510847 00:32:59.591 14:35:08 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:59.591 14:35:08 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:59.849 
14:35:09 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:59.849 rmmod nvme_tcp 00:32:59.849 rmmod nvme_fabrics 00:32:59.849 rmmod nvme_keyring 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1508319 ']' 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1508319 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1508319 ']' 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1508319 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1508319 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1508319' 00:32:59.849 killing process with pid 1508319 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1508319 00:32:59.849 14:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1508319 00:33:01.221 14:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:01.221 14:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:01.221 14:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:01.221 14:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:01.221 14:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:01.221 14:35:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.221 14:35:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:01.221 14:35:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.117 14:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:03.118 00:33:03.118 real 0m39.507s 00:33:03.118 user 2m18.678s 00:33:03.118 sys 0m6.209s 00:33:03.118 14:35:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:03.118 14:35:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
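A condensed sketch of the failover exercise recorded above, reconstructed from this transcript; the loop and the $rpc/$sock shorthands are illustrative, while the RPC calls, ports, paths and NQN are the ones shown in the log:

    #!/usr/bin/env bash
    # Sketch only -- mirrors the sequence host/failover.sh drove in this run.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Expose the subsystem on two extra portals so bdev_nvme has alternate paths.
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

    # Attach the same controller through all three portals inside bdevperf.
    for port in 4420 4421 4422; do
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
             -a 10.0.0.2 -s $port -f ipv4 -n $nqn
    done

    # Drop the active path; per the log, bdev_nvme then fails over to the next
    # portal and resets the controller.
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    sleep 3

    # Verify I/O still completes on a surviving path.
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests

    # Each forced failover is expected to log one 'Resetting controller successful';
    # the script counted three of them for the earlier 15-second run.
    grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt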
00:33:03.118 ************************************ 00:33:03.118 END TEST nvmf_failover 00:33:03.118 ************************************ 00:33:03.376 14:35:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:03.376 14:35:12 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:03.376 14:35:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:03.376 14:35:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.376 14:35:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:03.376 ************************************ 00:33:03.376 START TEST nvmf_host_discovery 00:33:03.376 ************************************ 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:03.376 * Looking for test storage... 00:33:03.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.376 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:03.377 14:35:12 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:33:03.377 14:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:33:05.278 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:05.279 14:35:14 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:05.279 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:05.279 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:05.279 14:35:14 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:05.279 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:05.279 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:05.279 14:35:14 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:05.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:05.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:33:05.279 00:33:05.279 --- 10.0.0.2 ping statistics --- 00:33:05.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.279 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:05.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:05.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:33:05.279 00:33:05.279 --- 10.0.0.1 ping statistics --- 00:33:05.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.279 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:05.279 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:05.538 14:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:05.538 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:05.538 14:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:05.538 14:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.538 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1514389 00:33:05.538 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:05.538 14:35:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1514389 00:33:05.538 14:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1514389 ']' 00:33:05.538 14:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.538 14:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:05.538 14:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.538 14:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:05.538 14:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.538 [2024-07-10 14:35:14.856573] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:33:05.538 [2024-07-10 14:35:14.856723] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.538 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.538 [2024-07-10 14:35:14.993315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.796 [2024-07-10 14:35:15.220210] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:05.796 [2024-07-10 14:35:15.220275] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.796 [2024-07-10 14:35:15.220314] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.796 [2024-07-10 14:35:15.220334] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:05.796 [2024-07-10 14:35:15.220352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
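For reference, the nvmf_tcp_init sequence traced above (moving the first port into its own namespace, addressing both ends, and starting the target inside it) boils down to roughly the following; this is a hand-written sketch, with SPDK_BIN assumed to point at the build/bin directory used in this run and the socket wait standing in for the test's waitforlisten helper:

  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin   # assumed path for this workspace

  # isolate the first port in a namespace and give each end an address, as in nvmf/common.sh@244-264
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # start the target on core 1 inside the namespace and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done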
00:33:05.796 [2024-07-10 14:35:15.220393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:06.360 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:06.360 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:33:06.360 14:35:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:06.360 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:06.360 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.618 [2024-07-10 14:35:15.847399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.618 [2024-07-10 14:35:15.855636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.618 null0 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.618 null1 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1514540 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1514540 /tmp/host.sock 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1514540 ']' 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:06.618 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:06.618 14:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.618 [2024-07-10 14:35:15.968752] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:33:06.618 [2024-07-10 14:35:15.968917] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514540 ] 00:33:06.618 EAL: No free 2048 kB hugepages reported on node 1 00:33:06.876 [2024-07-10 14:35:16.099895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.876 [2024-07-10 14:35:16.350684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.443 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:07.443 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:33:07.443 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:07.443 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:07.443 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.443 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.443 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.443 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:07.443 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.443 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.443 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.443 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:07.443 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:07.702 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:07.702 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:07.702 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.702 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.703 14:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
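Taken together, the target-side RPCs traced up to this point amount to the configuration below. This is a sketch that replays the same calls through scripts/rpc.py against the default /var/tmp/spdk.sock, on the assumption that the test's rpc_cmd helper forwards its arguments there unchanged; the data-port listener on 4420 and the allowed host NQN are added a few steps later in the script.

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"   # assumed rpc.py location for this workspace

  # TCP transport plus the discovery listener on 8009 (host/discovery.sh@32-33)
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

  # two 1000 MB, 512-byte-block null bdevs, then a data subsystem backed by the first (host/discovery.sh@35-37, @86, @90)
  $RPC bdev_null_create null0 1000 512
  $RPC bdev_null_create null1 1000 512
  $RPC bdev_wait_for_examine
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0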
00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.703 [2024-07-10 14:35:17.171319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:07.703 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:07.960 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:33:07.961 14:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:33:08.526 [2024-07-10 14:35:17.969660] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:08.526 [2024-07-10 14:35:17.969699] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:08.526 [2024-07-10 14:35:17.969754] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:08.783 [2024-07-10 14:35:18.056087] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:09.041 [2024-07-10 14:35:18.282590] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:09.042 [2024-07-10 14:35:18.282621] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:09.042 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:09.300 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.300 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.300 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:09.300 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:09.300 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.300 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:09.300 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:09.300 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:09.300 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:09.300 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:09.300 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:09.300 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.301 [2024-07-10 14:35:18.607741] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:09.301 [2024-07-10 14:35:18.609056] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:09.301 [2024-07-10 14:35:18.609116] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.301 [2024-07-10 14:35:18.736132] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:09.301 14:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:33:09.865 [2024-07-10 14:35:19.044943] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:09.866 [2024-07-10 14:35:19.044990] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:09.866 [2024-07-10 14:35:19.045006] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.433 [2024-07-10 14:35:19.848331] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:10.433 [2024-07-10 14:35:19.848408] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:10.433 [2024-07-10 14:35:19.853314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.433 [2024-07-10 14:35:19.853376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.433 [2024-07-10 14:35:19.853403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.433 [2024-07-10 14:35:19.853431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.433 [2024-07-10 14:35:19.853455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.433 [2024-07-10 14:35:19.853485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.433 [2024-07-10 14:35:19.853505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.433 [2024-07-10 14:35:19.853525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.433 [2024-07-10 14:35:19.853544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the 
state(5) to be set 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:10.433 [2024-07-10 14:35:19.863326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:10.433 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.433 [2024-07-10 14:35:19.873368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:10.433 [2024-07-10 14:35:19.873677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.433 [2024-07-10 14:35:19.873717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:10.433 [2024-07-10 14:35:19.873741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:10.433 [2024-07-10 14:35:19.873774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:10.433 [2024-07-10 14:35:19.873807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:10.433 [2024-07-10 14:35:19.873829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:10.433 [2024-07-10 14:35:19.873867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:10.433 [2024-07-10 14:35:19.873904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
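The waitforcondition churn that dominates this trace is a simple poll-until-true helper; reconstructed from the eval/sleep lines above (autotest_common.sh@912-918), it behaves roughly like this, with get_bdev_list being the same rpc_cmd/jq pipeline the script traces at host/discovery.sh@55:

  # reconstruction of the polling pattern seen in the trace, not the verbatim helpers
  waitforcondition() {
      local cond=$1 max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }

  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # e.g. host/discovery.sh@121: wait until both namespaces are visible on the host side
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'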
00:33:10.433 [2024-07-10 14:35:19.883498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:10.433 [2024-07-10 14:35:19.883749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.433 [2024-07-10 14:35:19.883785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:10.433 [2024-07-10 14:35:19.883807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:10.433 [2024-07-10 14:35:19.883839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:10.433 [2024-07-10 14:35:19.883869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:10.433 [2024-07-10 14:35:19.883889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:10.433 [2024-07-10 14:35:19.883908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:10.433 [2024-07-10 14:35:19.883936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:10.433 [2024-07-10 14:35:19.893630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:10.433 [2024-07-10 14:35:19.893882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.434 [2024-07-10 14:35:19.893920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:10.434 [2024-07-10 14:35:19.893943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:10.434 [2024-07-10 14:35:19.893975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:10.434 [2024-07-10 14:35:19.894005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:10.434 [2024-07-10 14:35:19.894027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:10.434 [2024-07-10 14:35:19.894052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:10.434 [2024-07-10 14:35:19.894094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.434 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.434 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:10.434 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:10.434 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:10.434 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:10.434 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:10.434 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:10.434 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:10.434 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:10.434 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.434 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:10.434 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.434 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:10.434 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:10.434 [2024-07-10 14:35:19.903773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:10.434 [2024-07-10 14:35:19.904019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.434 [2024-07-10 14:35:19.904057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:10.434 [2024-07-10 14:35:19.904080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:10.434 [2024-07-10 14:35:19.904113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:10.434 [2024-07-10 14:35:19.904143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:10.434 [2024-07-10 14:35:19.904164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:10.434 [2024-07-10 14:35:19.904182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:10.434 [2024-07-10 14:35:19.904210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
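(The waitforcondition helper driving these checks appears here only through its xtrace lines (autotest_common.sh@912-@916). A rough reconstruction of the polling loop, inferred from that trace rather than copied from the source, looks like:

# Reconstructed from the xtrace (local cond / max=10 / (( max-- )) / eval / return 0);
# the real helper may differ in details such as the delay between retries.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}
# As used at host/discovery.sh@130 above:
# waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
)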
00:33:10.693 [2024-07-10 14:35:19.913896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:10.693 [2024-07-10 14:35:19.914141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.693 [2024-07-10 14:35:19.914179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:10.693 [2024-07-10 14:35:19.914202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:10.693 [2024-07-10 14:35:19.914234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:10.693 [2024-07-10 14:35:19.914264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:10.693 [2024-07-10 14:35:19.914285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:10.693 [2024-07-10 14:35:19.914304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:10.693 [2024-07-10 14:35:19.914331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:10.693 [2024-07-10 14:35:19.924002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:10.693 [2024-07-10 14:35:19.924246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.693 [2024-07-10 14:35:19.924282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:10.693 [2024-07-10 14:35:19.924305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:10.693 [2024-07-10 14:35:19.924337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:10.693 [2024-07-10 14:35:19.924367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:10.693 [2024-07-10 14:35:19.924387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:10.693 [2024-07-10 14:35:19.924405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:10.693 [2024-07-10 14:35:19.924441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.693 [2024-07-10 14:35:19.934114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:10.693 [2024-07-10 14:35:19.934441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.693 [2024-07-10 14:35:19.934477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:10.693 [2024-07-10 14:35:19.934499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:10.693 [2024-07-10 14:35:19.934530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:10.693 [2024-07-10 14:35:19.934560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:10.693 [2024-07-10 14:35:19.934596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:10.693 [2024-07-10 14:35:19.934614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:10.693 [2024-07-10 14:35:19.934641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:10.693 [2024-07-10 14:35:19.936949] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:10.693 [2024-07-10 14:35:19.936989] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:10.693 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:10.694 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:10.694 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:10.694 14:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:10.694 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.694 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.694 14:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:10.694 
14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:10.694 14:35:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.694 14:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.067 [2024-07-10 14:35:21.218714] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:12.067 [2024-07-10 14:35:21.218757] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:12.067 [2024-07-10 14:35:21.218807] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:12.067 [2024-07-10 14:35:21.305129] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:12.067 [2024-07-10 14:35:21.374033] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:12.067 [2024-07-10 14:35:21.374106] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.067 
14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.067 request: 00:33:12.067 { 00:33:12.067 "name": "nvme", 00:33:12.067 "trtype": "tcp", 00:33:12.067 "traddr": "10.0.0.2", 00:33:12.067 "adrfam": "ipv4", 00:33:12.067 "trsvcid": "8009", 00:33:12.067 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:12.067 "wait_for_attach": true, 00:33:12.067 "method": "bdev_nvme_start_discovery", 00:33:12.067 "req_id": 1 00:33:12.067 } 00:33:12.067 Got JSON-RPC error response 00:33:12.067 response: 00:33:12.067 { 00:33:12.067 "code": -17, 00:33:12.067 "message": "File exists" 00:33:12.067 } 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.067 14:35:21 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.067 request: 00:33:12.067 { 00:33:12.067 "name": "nvme_second", 00:33:12.067 "trtype": "tcp", 00:33:12.067 "traddr": "10.0.0.2", 00:33:12.067 "adrfam": "ipv4", 00:33:12.067 "trsvcid": "8009", 00:33:12.067 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:12.067 "wait_for_attach": true, 00:33:12.067 "method": "bdev_nvme_start_discovery", 00:33:12.067 "req_id": 1 00:33:12.067 } 00:33:12.067 Got JSON-RPC error response 00:33:12.067 response: 00:33:12.067 { 00:33:12.067 "code": -17, 00:33:12.067 "message": "File exists" 00:33:12.067 } 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:12.067 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:12.325 14:35:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.325 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:12.325 14:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:12.326 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:12.326 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:12.326 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:12.326 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:12.326 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:12.326 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:12.326 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:12.326 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.326 14:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.257 [2024-07-10 14:35:22.573836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.257 [2024-07-10 14:35:22.573900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=8010 00:33:13.257 [2024-07-10 14:35:22.573988] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:13.257 [2024-07-10 14:35:22.574015] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:13.257 [2024-07-10 14:35:22.574035] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:14.188 [2024-07-10 14:35:23.576356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.188 [2024-07-10 14:35:23.576441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3680 with addr=10.0.0.2, port=8010 00:33:14.188 [2024-07-10 14:35:23.576540] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:14.188 [2024-07-10 14:35:23.576562] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:14.188 [2024-07-10 14:35:23.576582] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:15.119 [2024-07-10 14:35:24.578326] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:15.119 request: 00:33:15.119 { 00:33:15.119 "name": "nvme_second", 00:33:15.119 "trtype": "tcp", 00:33:15.119 "traddr": "10.0.0.2", 00:33:15.119 "adrfam": "ipv4", 00:33:15.119 "trsvcid": "8010", 00:33:15.119 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:15.119 "wait_for_attach": false, 00:33:15.119 "attach_timeout_ms": 3000, 00:33:15.119 "method": "bdev_nvme_start_discovery", 00:33:15.119 "req_id": 1 00:33:15.119 } 00:33:15.119 Got JSON-RPC error 
response 00:33:15.119 response: 00:33:15.119 { 00:33:15.119 "code": -110, 00:33:15.119 "message": "Connection timed out" 00:33:15.119 } 00:33:15.119 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:15.119 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:15.119 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:15.119 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:15.119 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:15.119 14:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:15.119 14:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:15.119 14:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:15.119 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.119 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.119 14:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:15.119 14:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:15.119 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1514540 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:15.377 rmmod nvme_tcp 00:33:15.377 rmmod nvme_fabrics 00:33:15.377 rmmod nvme_keyring 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1514389 ']' 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1514389 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1514389 ']' 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1514389 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1514389 00:33:15.377 14:35:24 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1514389' 00:33:15.377 killing process with pid 1514389 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1514389 00:33:15.377 14:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1514389 00:33:16.751 14:35:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:16.751 14:35:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:16.751 14:35:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:16.751 14:35:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:16.751 14:35:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:16.751 14:35:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.751 14:35:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:16.751 14:35:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.672 14:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:18.672 00:33:18.672 real 0m15.409s 00:33:18.672 user 0m22.948s 00:33:18.672 sys 0m2.989s 00:33:18.672 14:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:18.672 14:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.672 ************************************ 00:33:18.672 END TEST nvmf_host_discovery 00:33:18.672 ************************************ 00:33:18.672 14:35:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:18.672 14:35:28 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:18.672 14:35:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:18.672 14:35:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:18.672 14:35:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:18.672 ************************************ 00:33:18.672 START TEST nvmf_host_multipath_status 00:33:18.672 ************************************ 00:33:18.672 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:18.672 * Looking for test storage... 
00:33:18.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:18.672 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:18.672 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:18.931 14:35:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:33:18.931 14:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:20.831 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:20.831 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:20.831 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:20.831 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:20.831 14:35:30 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:20.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:20.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:33:20.831 00:33:20.831 --- 10.0.0.2 ping statistics --- 00:33:20.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.831 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:33:20.831 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:20.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:20.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:33:20.831 00:33:20.831 --- 10.0.0.1 ping statistics --- 00:33:20.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.831 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1517822 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1517822 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1517822 ']' 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:20.832 14:35:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:21.090 [2024-07-10 14:35:30.384851] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:33:21.090 [2024-07-10 14:35:30.384994] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:21.090 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.090 [2024-07-10 14:35:30.541104] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:21.348 [2024-07-10 14:35:30.771148] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:21.348 [2024-07-10 14:35:30.771216] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:21.348 [2024-07-10 14:35:30.771259] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:21.348 [2024-07-10 14:35:30.771276] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:21.348 [2024-07-10 14:35:30.771293] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:21.348 [2024-07-10 14:35:30.771408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.348 [2024-07-10 14:35:30.771415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.914 14:35:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:21.914 14:35:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:33:21.914 14:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:21.914 14:35:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:21.914 14:35:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:21.914 14:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:21.914 14:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1517822 00:33:21.914 14:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:22.172 [2024-07-10 14:35:31.597852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:22.172 14:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:22.738 Malloc0 00:33:22.738 14:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:22.738 14:35:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:22.996 14:35:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:23.254 [2024-07-10 14:35:32.667626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.254 14:35:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:23.512 [2024-07-10 14:35:32.908261] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:23.512 14:35:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1518115 00:33:23.512 14:35:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:23.512 14:35:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:23.512 14:35:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1518115 /var/tmp/bdevperf.sock 00:33:23.512 14:35:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1518115 ']' 00:33:23.512 14:35:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:23.512 14:35:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:23.512 14:35:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:23.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:23.512 14:35:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:23.512 14:35:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:24.886 14:35:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:24.886 14:35:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:33:24.886 14:35:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:24.886 14:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:33:25.144 Nvme0n1 00:33:25.144 14:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:25.709 Nvme0n1 00:33:25.709 14:35:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:25.709 14:35:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:27.611 14:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:27.611 14:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:27.870 14:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:28.436 14:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:29.369 14:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:29.369 14:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:29.369 14:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.369 14:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:29.627 14:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.627 14:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:29.627 14:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.627 14:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:29.884 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:29.884 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:29.884 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.884 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:30.142 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.142 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:30.142 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.142 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:30.399 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.399 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:30.399 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.399 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:30.399 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.400 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:30.400 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.400 14:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:30.964 14:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.964 14:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:30.964 14:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:30.964 14:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:31.221 14:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:32.596 14:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:32.596 14:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:32.596 14:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.596 14:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:32.596 14:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:32.596 14:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:32.596 14:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.596 14:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:32.854 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.854 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:32.854 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.854 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:33.112 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.112 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:33.112 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.112 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:33.370 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.370 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:33.370 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.370 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:33.627 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.627 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:33.627 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.627 14:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:33.885 14:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.885 14:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:33.885 14:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:34.143 14:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:34.401 14:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:35.331 14:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:35.331 14:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:35.331 14:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.331 14:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:35.587 14:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.587 14:35:44 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:35.587 14:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.587 14:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:35.844 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:35.844 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:35.844 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.844 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:36.100 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.100 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:36.100 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.100 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:36.356 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.356 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:36.356 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.356 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:36.613 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.613 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:36.613 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.613 14:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:36.870 14:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.870 14:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:36.870 14:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:37.127 14:35:46 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:37.384 14:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:38.319 14:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:38.319 14:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:38.319 14:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.319 14:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:38.577 14:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.577 14:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:38.577 14:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.577 14:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:38.836 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:38.836 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:38.836 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.836 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:39.094 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.094 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:39.094 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.094 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:39.352 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.352 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:39.352 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.352 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:39.611 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:33:39.611 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:39.611 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.611 14:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:39.869 14:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:39.869 14:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:39.869 14:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:40.127 14:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:40.385 14:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:41.318 14:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:41.318 14:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:41.318 14:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.319 14:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:41.577 14:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:41.577 14:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:41.577 14:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.577 14:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:41.835 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:41.835 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:41.835 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.835 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:42.092 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.092 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:33:42.092 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.092 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:42.349 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.349 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:42.349 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.349 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:42.606 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:42.606 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:42.606 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.606 14:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:42.864 14:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:42.864 14:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:42.864 14:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:43.123 14:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:43.381 14:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:44.324 14:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:44.324 14:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:44.324 14:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.324 14:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:44.582 14:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:44.582 14:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:44.582 14:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.582 14:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:44.840 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.840 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:44.840 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.840 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:45.119 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:45.119 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:45.119 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.119 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:45.411 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:45.411 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:45.411 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.411 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:45.669 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:45.669 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:45.669 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.669 14:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:45.928 14:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:45.928 14:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:46.186 14:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:46.186 14:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:33:46.445 14:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:46.703 14:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:47.637 14:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:47.637 14:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:47.638 14:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.638 14:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:47.895 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.895 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:47.895 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.895 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:48.154 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.154 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:48.154 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.154 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:48.412 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.412 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:48.413 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.413 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:48.671 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.671 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:48.671 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.671 14:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:48.929 14:35:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.929 14:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:48.929 14:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.930 14:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:49.188 14:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.188 14:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:49.188 14:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:49.446 14:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:49.704 14:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:50.638 14:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:50.638 14:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:50.638 14:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.638 14:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:50.897 14:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:50.897 14:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:50.897 14:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.897 14:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:51.156 14:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.156 14:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:51.156 14:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.156 14:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:51.415 14:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.415 14:36:00 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:51.415 14:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.415 14:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:51.673 14:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.673 14:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:51.673 14:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.673 14:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:51.931 14:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.931 14:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:51.931 14:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.931 14:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:52.189 14:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.189 14:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:52.189 14:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:52.446 14:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:52.706 14:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:53.636 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:53.636 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:53.636 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.636 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:53.894 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.894 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:53.894 14:36:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.894 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:54.151 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.151 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:54.151 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.151 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:54.717 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.717 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:54.717 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.717 14:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:54.717 14:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.717 14:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:54.717 14:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.717 14:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:54.975 14:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.975 14:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:54.975 14:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.975 14:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:55.232 14:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.232 14:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:55.232 14:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:55.488 14:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:55.745 14:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:57.118 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:57.118 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:57.118 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.118 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:57.118 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.118 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:57.118 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.118 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:57.377 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:57.377 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:57.377 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.377 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:57.635 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.635 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:57.635 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.635 14:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:57.899 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.899 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:57.899 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.899 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:58.160 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.160 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:58.160 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.160 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:58.418 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:58.418 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1518115 00:33:58.418 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1518115 ']' 00:33:58.418 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1518115 00:33:58.418 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:33:58.418 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:58.418 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1518115 00:33:58.418 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:33:58.418 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:33:58.418 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1518115' 00:33:58.418 killing process with pid 1518115 00:33:58.418 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1518115 00:33:58.418 14:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1518115 00:33:58.981 Connection closed with partial response: 00:33:58.981 00:33:58.981 00:33:59.551 14:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1518115 00:33:59.551 14:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:59.551 [2024-07-10 14:35:32.999698] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:33:59.551 [2024-07-10 14:35:32.999870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518115 ] 00:33:59.551 EAL: No free 2048 kB hugepages reported on node 1 00:33:59.551 [2024-07-10 14:35:33.127316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.551 [2024-07-10 14:35:33.356980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:59.551 Running I/O for 90 seconds... 
00:33:59.551 [2024-07-10 14:35:49.434988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.435103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.435210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.435240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.435299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.435342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.435380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.435407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.435454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.435483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.435520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.435546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.435583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.435609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.435644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.435671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.435707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.435748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.435784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.435826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.435864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.435899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.435937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.435962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.435999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.436024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.436060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.436086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.436122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.436162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.436200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.436241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.436279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.436304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:59.551 [2024-07-10 14:35:49.436339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.551 [2024-07-10 14:35:49.436365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.436400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.436434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.436473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.436499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.436535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.436560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.436595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.436621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.436657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:48344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.436682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.436723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.436749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.436785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.436810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.436846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.436871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.436907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.436932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.436982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.437007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.437059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.437085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.437121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:59.552 [2024-07-10 14:35:49.437178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.437218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.437245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.437282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.437307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.437811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.437844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.437890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.437917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.437956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.437981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.438026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.438052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.438106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.438132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.438170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.438194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.438231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.438256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.438294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.438318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.438355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.438379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.438417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.438466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.438507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.438533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.438572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.438599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.438637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.438663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.438701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.438726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.438779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.438820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.438868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.438899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.438939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.438965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.439004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.439030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.439068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.439093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.439132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.439158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.439196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.439222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.439260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.439301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.439340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.439364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.439403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.439454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.439499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.439525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.439565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.439590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.439630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.439656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:33:59.552 [2024-07-10 14:35:49.439694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.439725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.439780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.439805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.439843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.439868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.439906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.439931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.439969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.552 [2024-07-10 14:35:49.439998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.440035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.440060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.440097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.440122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.440175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.440201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.440239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.440264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.440300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.440325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.440362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.440387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:59.552 [2024-07-10 14:35:49.440449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.552 [2024-07-10 14:35:49.440476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.440515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.440540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.440584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.440611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.440650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.440676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.440713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.440738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.440791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.440817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.440853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.440877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.440914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.440939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.440976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 
[2024-07-10 14:35:49.441666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.441968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.441992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.442029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.442054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.442091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.442115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.442153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.442186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.442224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.442249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.442286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:48944 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.442314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.442352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.442377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.442436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.442464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.442504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.442530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.442568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.442596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.442635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.442661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.442931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.442962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.443012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.443039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.443084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.443110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.443153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.443178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.443220] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.443251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.443295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.443321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.443363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.443388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.443439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.443466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.443510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.443536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.443578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.443605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.443648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.443673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.443715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.443741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.443784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.443811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.443853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.443878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 
14:35:49.443921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.443947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.443989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.444015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.444056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.553 [2024-07-10 14:35:49.444083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.444130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.444157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.444200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.444226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.444270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.444296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.444339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.444365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.444407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.444455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.444503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.444531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:35:49.444576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:35:49.444602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 
cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:36:05.173373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.553 [2024-07-10 14:36:05.173488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:59.553 [2024-07-10 14:36:05.173547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.173575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.173612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.173654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.173692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.173718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.173770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.173795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.173838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.173864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.173898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.173923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.173956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.173981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.174015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.174040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.174075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.174099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.174134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.174159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.174210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.174234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.174283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.174309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.174343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.174387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.174469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.174500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.174547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.174573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.174609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.174635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.174670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.174701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.174755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.174780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.174816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.174841] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.174884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.174908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.174942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.174967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.175002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.175026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.175061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.175085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.175120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.175160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.175194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.175219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.175252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.175276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.175310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.175334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.175368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.175391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.175431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:59.554 [2024-07-10 14:36:05.175478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.175516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.175541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.175575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.175600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.175635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.175660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.175694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.175719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.175770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.175794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.175828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.175852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.178665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.178703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.178764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.178791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.178827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.178851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.178886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:82 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.178911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.178946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.178970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.179005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.179029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.179068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.179093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.179128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.554 [2024-07-10 14:36:05.179151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.179185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.554 [2024-07-10 14:36:05.179208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.179241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.554 [2024-07-10 14:36:05.179266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.179299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.554 [2024-07-10 14:36:05.179323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.179357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.554 [2024-07-10 14:36:05.179381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.179438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.554 [2024-07-10 14:36:05.179464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.179500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.554 [2024-07-10 14:36:05.179540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:59.554 [2024-07-10 14:36:05.179578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.179603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.179638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.179663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.179697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.179722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.179772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.179796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.179839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.179880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.179916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.179941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.179977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.180001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.180036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.180061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.180096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.180121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
00:33:59.555 [2024-07-10 14:36:05.180170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.180195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.180228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.180253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.180287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.180311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.180826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.180857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.180918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.180945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.180982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.181008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.181044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.181069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.181106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.181136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.181189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.181214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.181249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.181274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.181324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.181349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.181383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.181422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.181469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.181495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.181530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.181554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.181589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.181614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.181667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.181692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.181744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.181769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.181803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.181828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.181862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.181887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.181922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.181966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.182017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.182042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.182077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.182117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.182153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.182192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.182236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.182278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.182344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.182371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.182445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.182474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.182512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.182538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.182574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.182600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.182635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.182660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.182696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:59.555 [2024-07-10 14:36:05.182736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.182772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.182812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.182847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.555 [2024-07-10 14:36:05.182875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.182911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.182935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.182968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.182992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.183946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.555 [2024-07-10 14:36:05.183970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:59.555 [2024-07-10 14:36:05.184685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.184719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.184775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.184803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.184841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.184867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.184909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.184935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.184971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.185022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.185060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.185101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.185156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.185182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.185225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.185250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.185303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.185329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:33:59.556 [2024-07-10 14:36:05.185364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.185390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.185448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.185478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.185514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.185540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.185576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.185602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.185637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.185663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.185698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.185724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.185760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.185786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.186744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.186778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.186820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.186847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.186883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.186915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.186952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.186994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.187045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.187069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.187103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.187127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.187161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.187186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.187220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.187244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.187278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.187303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.187354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.187381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.187441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.187469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.187506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.187532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.187569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.187594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.187629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.187654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.187690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.187738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.187801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.187827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.187861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.187887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.187935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.187963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.188012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.188039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.188075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.188100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.188135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.188160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.188194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.188219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.188254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:59.556 [2024-07-10 14:36:05.188294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.189866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.189912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.189966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.189993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.190070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.190149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.190217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.190277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.190336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.190419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.190508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.190570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.190631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.190693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.190773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.190846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.190902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.190958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.190995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.191019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.191052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.191076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.191108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.556 [2024-07-10 14:36:05.191131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.191163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.191187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.191219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.191242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:59.556 [2024-07-10 14:36:05.191274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.556 [2024-07-10 14:36:05.191298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.191331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.191354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.195481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.195518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.195563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.195591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.195629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.195655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.195691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.195731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.195767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.195807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.195841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.195871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:33:59.557 [2024-07-10 14:36:05.195907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.195943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.195980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.196004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.196038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.196062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.196095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.196120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.196152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.196176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.196210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.196250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.196286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.196310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.196363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.196437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.196482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.196508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.196545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.196570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.196606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.196632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.196668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.196711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.196763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.196802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.196836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.196859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.196891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.196914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.196947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.196970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.197027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.197082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.197138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.197213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.197271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.197349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.197433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.197500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.197567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.197643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.197722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.197795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.197851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:59.557 [2024-07-10 14:36:05.197906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.197961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.197994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.557 [2024-07-10 14:36:05.198017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.198049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.198072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:59.557 [2024-07-10 14:36:05.198105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.557 [2024-07-10 14:36:05.198129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.200689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.200730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.200775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.200803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.200862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.200889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.200923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.200948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.200983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.201008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.201057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.201081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.201129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.201152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.201184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.201207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.201259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.201300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.201336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.201362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.201436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.201479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.201531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.201559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.201597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.201622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.201657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.201682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.201736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.201766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.201815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.201839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.201871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.201894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.201925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.201948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.201980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.202004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.202035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.202058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.202091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.202114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.202146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.202169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.202201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.202224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.202256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.202279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.202310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.202333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:33:59.558 [2024-07-10 14:36:05.202366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.202389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.202445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.202476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.202513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.202539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.205199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.205234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.205276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.205302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.205351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.205389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.205448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.205476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.205527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.205552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.205586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.205610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.205643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.205667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.205700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.205724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.205771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.205794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.205825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.205848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.205880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.205903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.205956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.205982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.206057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.206125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.206199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.206274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.206331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.206405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.206493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.206554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.206614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.206690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.206765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.206842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.206898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.558 [2024-07-10 14:36:05.206952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.206984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:59.558 [2024-07-10 14:36:05.207007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.207039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.207061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.207093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.207116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.207148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.207171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:59.558 [2024-07-10 14:36:05.207203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.558 [2024-07-10 14:36:05.207226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.210616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.210654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.210717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.210771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.210814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.210855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.210907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.210932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.210966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.210995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 
nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.211054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.211126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.211183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.211238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.211293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.211349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.211404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.211486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.211546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.211603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.211659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.211720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.211792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.211847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.211903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.211934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.211971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.212006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.212046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.212080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.212104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.212152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.212179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.212229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.212254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:33:59.559 [2024-07-10 14:36:05.212288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.212312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.212348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.212372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.212406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.212459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.212497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.212523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.212578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.212604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.212638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.212662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.212695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.212719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.212778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.212802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.212835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.212859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.212892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.212915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.212948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.213013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.213051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.213075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.213109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.213132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.213165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.213189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.213223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.213246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.213280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.213304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.216209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.216259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.216318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.216345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.216381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.216422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.216472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.216498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.216534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.216560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.216596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.216621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.216656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.216681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.216718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.216777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.216815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.216857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.216894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.216919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.216954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.216979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.217015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.217041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.217091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.217122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.217172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:59.559 [2024-07-10 14:36:05.217195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.217228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.217251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.217283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.559 [2024-07-10 14:36:05.217316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.217348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.217371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.217418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.217451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.217487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.559 [2024-07-10 14:36:05.217511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:59.559 [2024-07-10 14:36:05.217544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.217569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.219902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.219934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.219984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.220010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.220068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.220124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.220183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.220241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.220297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.220352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.220422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.220530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.220612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.220681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.220758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.220832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.220889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.220945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.220977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.221001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.221054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.221078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.221111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.221134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.221166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.221206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.221241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.221265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.221323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.221349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.221383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.221408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:33:59.560 [2024-07-10 14:36:05.221469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.221498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.221548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.221590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.221635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.221660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.221695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.221720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.221770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.221795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.221828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.221870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.221908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.221947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.221979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.222002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.222034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.222056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.222088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.222111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.222143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.222166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.222198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.222220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.222252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.222275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.222308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.222332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.222364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.222401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.222442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.222492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.222545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.560 [2024-07-10 14:36:05.222572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.222607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.222632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.222668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.222698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.226877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.226914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.226985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.227014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.227049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.227074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.227107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.227131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.227183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.227208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.227258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.227283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.227332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.227358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.227393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.227417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.227481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.227509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:59.560 [2024-07-10 14:36:05.227545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.560 [2024-07-10 14:36:05.227570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.227620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:59.561 [2024-07-10 14:36:05.227646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.227681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.227711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.227761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.227786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.227835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.227860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.227894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.227919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.227953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.227977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.228011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.228035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.228081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.228105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.228139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.228163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.228212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.561 [2024-07-10 14:36:05.228236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.228269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 
nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.561 [2024-07-10 14:36:05.228293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.228343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.228368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.228437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.228465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.228501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.228527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.228566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.561 [2024-07-10 14:36:05.228605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.228650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.561 [2024-07-10 14:36:05.228680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.228745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.561 [2024-07-10 14:36:05.228772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.228807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.228848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.228890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.228914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.228948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.228972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.229019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.229043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.229075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.229098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.229129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.561 [2024-07-10 14:36:05.229152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.229184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.561 [2024-07-10 14:36:05.229207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.229238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.229261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.229292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.229315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.229354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.561 [2024-07-10 14:36:05.229378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.229439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.229465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.229518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.229544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:59.561 [2024-07-10 14:36:05.229579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.561 [2024-07-10 14:36:05.229604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:33:59.561 [2024-07-10 14:36:05.229656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.561 [2024-07-10 14:36:05.229686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:33:59.561 Received shutdown signal, test time was about 32.597026 seconds
00:33:59.561
00:33:59.561 Latency(us)
00:33:59.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:59.561 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:59.561 Verification LBA range: start 0x0 length 0x4000
00:33:59.561 Nvme0n1 : 32.60 5757.94 22.49 0.00 0.00 22193.24 794.93 4026531.84
00:33:59.561 ===================================================================================================================
00:33:59.561 Total : 5757.94 22.49 0.00 0.00 22193.24 794.93 4026531.84
00:33:59.561 14:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:59.561 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:33:59.561 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:59.561 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:33:59.561 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:33:59.561 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:33:59.561 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:33:59.561 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:33:59.561 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:33:59.561 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:33:59.561 rmmod nvme_tcp
00:33:59.818 rmmod nvme_fabrics
00:33:59.819 rmmod nvme_keyring
00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1517822 ']'
00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1517822
00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1517822 ']'
00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1517822
00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1517822
00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status --
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1517822' 00:33:59.819 killing process with pid 1517822 00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1517822 00:33:59.819 14:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1517822 00:34:01.192 14:36:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:01.192 14:36:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:01.192 14:36:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:01.192 14:36:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:01.192 14:36:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:01.192 14:36:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.192 14:36:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:01.192 14:36:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.730 14:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:03.730 00:34:03.730 real 0m44.548s 00:34:03.730 user 2m11.122s 00:34:03.731 sys 0m10.922s 00:34:03.731 14:36:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:03.731 14:36:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:03.731 ************************************ 00:34:03.731 END TEST nvmf_host_multipath_status 00:34:03.731 ************************************ 00:34:03.731 14:36:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:03.731 14:36:12 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:03.731 14:36:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:03.731 14:36:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:03.731 14:36:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:03.731 ************************************ 00:34:03.731 START TEST nvmf_discovery_remove_ifc 00:34:03.731 ************************************ 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:03.731 * Looking for test storage... 
00:34:03.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:34:03.731 14:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:05.634 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:05.634 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:05.634 14:36:14 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:05.634 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:05.634 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:05.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:05.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:34:05.634 00:34:05.634 --- 10.0.0.2 ping statistics --- 00:34:05.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.634 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:05.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:05.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:34:05.634 00:34:05.634 --- 10.0.0.1 ping statistics --- 00:34:05.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.634 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:05.634 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:05.635 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:05.635 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.635 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1525183 00:34:05.635 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:05.635 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1525183 00:34:05.635 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1525183 ']' 00:34:05.635 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.635 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:05.635 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.635 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:05.635 14:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.635 [2024-07-10 14:36:15.062486] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:34:05.635 [2024-07-10 14:36:15.062623] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:05.892 EAL: No free 2048 kB hugepages reported on node 1 00:34:05.892 [2024-07-10 14:36:15.199083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.150 [2024-07-10 14:36:15.453061] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:06.150 [2024-07-10 14:36:15.453143] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:06.150 [2024-07-10 14:36:15.453173] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:06.150 [2024-07-10 14:36:15.453198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:06.150 [2024-07-10 14:36:15.453220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:06.150 [2024-07-10 14:36:15.453298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:06.715 14:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:06.715 14:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:34:06.715 14:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:06.715 14:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:06.715 14:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:06.715 14:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:06.715 14:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:06.715 14:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.715 14:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:06.715 [2024-07-10 14:36:16.012603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:06.715 [2024-07-10 14:36:16.020802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:06.715 null0 00:34:06.715 [2024-07-10 14:36:16.052672] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:06.715 14:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.715 14:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1525336 00:34:06.715 14:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:06.715 14:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1525336 /tmp/host.sock 00:34:06.715 14:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1525336 ']' 00:34:06.715 14:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:34:06.715 14:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:34:06.715 14:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:06.715 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:06.715 14:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:06.715 14:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:06.715 [2024-07-10 14:36:16.157384] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:34:06.715 [2024-07-10 14:36:16.157561] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525336 ] 00:34:06.973 EAL: No free 2048 kB hugepages reported on node 1 00:34:06.973 [2024-07-10 14:36:16.302545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.231 [2024-07-10 14:36:16.552755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.795 14:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:07.795 14:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:34:07.795 14:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:07.795 14:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:07.795 14:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.795 14:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:07.795 14:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.795 14:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:07.795 14:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.795 14:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.052 14:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.053 14:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:08.053 14:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.053 14:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.984 [2024-07-10 14:36:18.455103] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:08.984 [2024-07-10 14:36:18.455158] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:08.984 [2024-07-10 14:36:18.455198] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:09.241 [2024-07-10 14:36:18.542524] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:09.498 [2024-07-10 14:36:18.769601] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:09.498 [2024-07-10 14:36:18.769704] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:09.498 [2024-07-10 14:36:18.769818] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:09.498 [2024-07-10 14:36:18.769858] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:09.498 [2024-07-10 14:36:18.769914] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:09.498 [2024-07-10 14:36:18.773766] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2780 was disconnected and freed. delete nvme_qpair. 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.498 14:36:18 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:09.498 14:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:10.868 14:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:10.868 14:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:10.868 14:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:10.868 14:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.868 14:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:10.868 14:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:10.868 14:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:10.868 14:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.868 14:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:10.868 14:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:11.858 14:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:11.858 14:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.858 14:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.859 14:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:11.859 14:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:11.859 14:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:11.859 14:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:11.859 14:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.859 14:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:11.859 14:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:12.793 14:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:12.793 14:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:12.793 14:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.793 14:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:12.793 14:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.793 14:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:12.793 14:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:12.793 14:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.793 14:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:12.793 14:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:13.729 14:36:23 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:13.729 14:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.729 14:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:13.729 14:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.729 14:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.729 14:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:13.729 14:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:13.729 14:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.729 14:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:13.729 14:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:14.662 14:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:14.662 14:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:14.662 14:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:14.662 14:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.662 14:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.662 14:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:14.662 14:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:14.662 14:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.662 14:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:14.662 14:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:14.920 [2024-07-10 14:36:24.211503] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:14.920 [2024-07-10 14:36:24.211595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.920 [2024-07-10 14:36:24.211638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.920 [2024-07-10 14:36:24.211679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.920 [2024-07-10 14:36:24.211736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.920 [2024-07-10 14:36:24.211778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.920 [2024-07-10 14:36:24.211818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.920 [2024-07-10 14:36:24.211859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.920 [2024-07-10 14:36:24.211897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.920 [2024-07-10 14:36:24.211938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.920 [2024-07-10 14:36:24.211974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.920 [2024-07-10 14:36:24.212010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:34:14.920 [2024-07-10 14:36:24.221511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:34:14.920 [2024-07-10 14:36:24.231577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:15.854 14:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:15.854 14:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:15.854 14:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.854 14:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:15.854 14:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:15.854 14:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:15.854 14:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:15.854 [2024-07-10 14:36:25.246470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:15.854 [2024-07-10 14:36:25.246557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:34:15.854 [2024-07-10 14:36:25.246609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:34:15.854 [2024-07-10 14:36:25.246701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:34:15.854 [2024-07-10 14:36:25.247548] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:15.854 [2024-07-10 14:36:25.247596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:15.854 [2024-07-10 14:36:25.247638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:15.854 [2024-07-10 14:36:25.247672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:15.854 [2024-07-10 14:36:25.247755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
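The repeated host/discovery_remove_ifc.sh@33/@34 lines above are a one-second polling loop: the bdev list is re-read until it matches the expected value while the initiator keeps failing to reconnect (errno 110, connection timed out). A minimal sketch of that helper pair, reconstructed from the commands visible in the trace; the function bodies and the loop wrapper are assumptions, only the individual commands appear in the log:

    # assumed reconstruction of get_bdev_list / wait_for_bdev as exercised above
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local bdev=$1
        # loop until the reported bdev list equals the expected value ('' while
        # waiting for removal, nvme1n1 after rediscovery), re-checking every second
        while [[ "$(get_bdev_list)" != "$bdev" ]]; do
            sleep 1
        done
    }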
00:34:15.854 [2024-07-10 14:36:25.247807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:15.854 14:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.854 14:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:15.854 14:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:16.788 [2024-07-10 14:36:26.250369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:16.788 [2024-07-10 14:36:26.250420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:16.788 [2024-07-10 14:36:26.250480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:16.788 [2024-07-10 14:36:26.250510] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:34:16.788 [2024-07-10 14:36:26.250556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.788 [2024-07-10 14:36:26.250625] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:16.788 [2024-07-10 14:36:26.250712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.788 [2024-07-10 14:36:26.250768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.788 [2024-07-10 14:36:26.250815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.788 [2024-07-10 14:36:26.250857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.788 [2024-07-10 14:36:26.250898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.788 [2024-07-10 14:36:26.250938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.788 [2024-07-10 14:36:26.250978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.788 [2024-07-10 14:36:26.251017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.788 [2024-07-10 14:36:26.251058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.788 [2024-07-10 14:36:26.251095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.788 [2024-07-10 14:36:26.251134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:34:16.788 [2024-07-10 14:36:26.251243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:34:16.788 [2024-07-10 14:36:26.252227] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:16.788 [2024-07-10 14:36:26.252267] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:34:16.788 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:16.788 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.788 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:16.788 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.788 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.788 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:16.788 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:17.046 14:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:17.980 14:36:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:17.980 14:36:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:17.980 14:36:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:17.980 14:36:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.980 14:36:27 nvmf_tcp.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@10 -- # set +x 00:34:17.980 14:36:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:17.980 14:36:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:17.980 14:36:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.980 14:36:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:17.980 14:36:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:18.914 [2024-07-10 14:36:28.312660] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:18.914 [2024-07-10 14:36:28.312715] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:18.914 [2024-07-10 14:36:28.312755] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:19.173 [2024-07-10 14:36:28.399115] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:19.173 14:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:19.173 14:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:19.173 14:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:19.173 14:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.173 14:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:19.173 14:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:19.173 14:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:19.173 14:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.173 [2024-07-10 14:36:28.462523] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:19.173 [2024-07-10 14:36:28.462592] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:19.173 [2024-07-10 14:36:28.462675] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:19.173 [2024-07-10 14:36:28.462712] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:19.173 [2024-07-10 14:36:28.462746] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:19.173 [2024-07-10 14:36:28.470203] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2f00 was disconnected and freed. delete nvme_qpair. 
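After the old controller is torn down, host/discovery_remove_ifc.sh@82-@86 above put the target address back inside the cvl_0_0_ns_spdk namespace and wait for the rediscovered namespace to reappear as nvme1n1. Condensed from the commands shown in the trace (addresses, interface and bdev names copied from the log):

    # restore the target-side interface inside the test namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # the discovery poller then re-attaches nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
    # as nvme1, and the test polls until the new bdev is listed
    wait_for_bdev nvme1n1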
00:34:19.173 14:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:19.173 14:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1525336 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1525336 ']' 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1525336 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1525336 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1525336' 00:34:20.106 killing process with pid 1525336 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1525336 00:34:20.106 14:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1525336 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:21.481 rmmod nvme_tcp 00:34:21.481 rmmod nvme_fabrics 00:34:21.481 rmmod nvme_keyring 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1525183 ']' 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1525183 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1525183 ']' 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1525183 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1525183 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1525183' 00:34:21.481 killing process with pid 1525183 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1525183 00:34:21.481 14:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1525183 00:34:22.855 14:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:22.855 14:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:22.855 14:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:22.855 14:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:22.855 14:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:22.855 14:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.855 14:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:22.855 14:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.757 14:36:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:24.757 00:34:24.757 real 0m21.300s 00:34:24.757 user 0m31.110s 00:34:24.757 sys 0m3.411s 00:34:24.757 14:36:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:24.757 14:36:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:24.757 ************************************ 00:34:24.757 END TEST nvmf_discovery_remove_ifc 00:34:24.757 ************************************ 00:34:24.757 14:36:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:24.757 14:36:34 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:24.757 14:36:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:24.757 14:36:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:34:24.757 14:36:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:24.757 ************************************ 00:34:24.757 START TEST nvmf_identify_kernel_target 00:34:24.757 ************************************ 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:24.757 * Looking for test storage... 00:34:24.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:24.757 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:34:24.758 14:36:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:34:24.758 14:36:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:26.658 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:26.658 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:26.658 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:26.658 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:26.659 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:26.659 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:26.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:26.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:34:26.918 00:34:26.918 --- 10.0.0.2 ping statistics --- 00:34:26.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.918 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:26.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:26.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:34:26.918 00:34:26.918 --- 10.0.0.1 ping statistics --- 00:34:26.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.918 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:26.918 14:36:36 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:26.918 14:36:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:27.853 Waiting for block devices as requested 00:34:28.112 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:28.112 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:28.371 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:28.371 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:28.371 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:28.371 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:28.630 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:28.630 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:28.630 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:28.630 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:28.889 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:28.889 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:28.889 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:28.889 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:29.152 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:29.152 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:29.152 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:29.410 No valid GPT data, bailing 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:34:29.410 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:29.411 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:34:29.411 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:34:29.411 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:34:29.411 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:29.411 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:29.411 00:34:29.411 Discovery Log Number of Records 2, Generation counter 2 00:34:29.411 =====Discovery Log Entry 0====== 00:34:29.411 trtype: tcp 00:34:29.411 adrfam: ipv4 00:34:29.411 subtype: current discovery subsystem 00:34:29.411 treq: not specified, sq flow control disable supported 00:34:29.411 portid: 1 00:34:29.411 trsvcid: 4420 00:34:29.411 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:29.411 traddr: 10.0.0.1 00:34:29.411 eflags: none 00:34:29.411 sectype: none 00:34:29.411 =====Discovery Log Entry 1====== 00:34:29.411 trtype: tcp 00:34:29.411 adrfam: ipv4 00:34:29.411 subtype: nvme subsystem 00:34:29.411 treq: not specified, sq flow control disable supported 00:34:29.411 portid: 1 00:34:29.411 trsvcid: 4420 00:34:29.411 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:29.411 traddr: 10.0.0.1 00:34:29.411 eflags: none 00:34:29.411 sectype: none 00:34:29.411 14:36:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:29.411 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:29.670 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.670 ===================================================== 00:34:29.670 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:29.670 ===================================================== 00:34:29.670 Controller Capabilities/Features 00:34:29.670 ================================ 00:34:29.670 Vendor ID: 0000 00:34:29.670 Subsystem Vendor ID: 0000 00:34:29.670 Serial Number: e024009f55fc516e30e6 00:34:29.670 Model Number: Linux 00:34:29.670 Firmware Version: 6.7.0-68 00:34:29.670 Recommended Arb Burst: 0 00:34:29.670 IEEE OUI Identifier: 00 00 00 00:34:29.670 Multi-path I/O 00:34:29.670 May have multiple subsystem ports: No 00:34:29.670 May have multiple 
controllers: No 00:34:29.670 Associated with SR-IOV VF: No 00:34:29.670 Max Data Transfer Size: Unlimited 00:34:29.670 Max Number of Namespaces: 0 00:34:29.670 Max Number of I/O Queues: 1024 00:34:29.670 NVMe Specification Version (VS): 1.3 00:34:29.670 NVMe Specification Version (Identify): 1.3 00:34:29.670 Maximum Queue Entries: 1024 00:34:29.670 Contiguous Queues Required: No 00:34:29.670 Arbitration Mechanisms Supported 00:34:29.670 Weighted Round Robin: Not Supported 00:34:29.670 Vendor Specific: Not Supported 00:34:29.670 Reset Timeout: 7500 ms 00:34:29.670 Doorbell Stride: 4 bytes 00:34:29.670 NVM Subsystem Reset: Not Supported 00:34:29.670 Command Sets Supported 00:34:29.670 NVM Command Set: Supported 00:34:29.670 Boot Partition: Not Supported 00:34:29.670 Memory Page Size Minimum: 4096 bytes 00:34:29.670 Memory Page Size Maximum: 4096 bytes 00:34:29.670 Persistent Memory Region: Not Supported 00:34:29.670 Optional Asynchronous Events Supported 00:34:29.670 Namespace Attribute Notices: Not Supported 00:34:29.670 Firmware Activation Notices: Not Supported 00:34:29.670 ANA Change Notices: Not Supported 00:34:29.670 PLE Aggregate Log Change Notices: Not Supported 00:34:29.670 LBA Status Info Alert Notices: Not Supported 00:34:29.670 EGE Aggregate Log Change Notices: Not Supported 00:34:29.670 Normal NVM Subsystem Shutdown event: Not Supported 00:34:29.670 Zone Descriptor Change Notices: Not Supported 00:34:29.670 Discovery Log Change Notices: Supported 00:34:29.670 Controller Attributes 00:34:29.670 128-bit Host Identifier: Not Supported 00:34:29.670 Non-Operational Permissive Mode: Not Supported 00:34:29.670 NVM Sets: Not Supported 00:34:29.670 Read Recovery Levels: Not Supported 00:34:29.670 Endurance Groups: Not Supported 00:34:29.670 Predictable Latency Mode: Not Supported 00:34:29.670 Traffic Based Keep ALive: Not Supported 00:34:29.670 Namespace Granularity: Not Supported 00:34:29.670 SQ Associations: Not Supported 00:34:29.670 UUID List: Not Supported 00:34:29.670 Multi-Domain Subsystem: Not Supported 00:34:29.670 Fixed Capacity Management: Not Supported 00:34:29.670 Variable Capacity Management: Not Supported 00:34:29.670 Delete Endurance Group: Not Supported 00:34:29.670 Delete NVM Set: Not Supported 00:34:29.670 Extended LBA Formats Supported: Not Supported 00:34:29.670 Flexible Data Placement Supported: Not Supported 00:34:29.670 00:34:29.670 Controller Memory Buffer Support 00:34:29.670 ================================ 00:34:29.670 Supported: No 00:34:29.670 00:34:29.670 Persistent Memory Region Support 00:34:29.670 ================================ 00:34:29.670 Supported: No 00:34:29.670 00:34:29.670 Admin Command Set Attributes 00:34:29.670 ============================ 00:34:29.670 Security Send/Receive: Not Supported 00:34:29.670 Format NVM: Not Supported 00:34:29.670 Firmware Activate/Download: Not Supported 00:34:29.670 Namespace Management: Not Supported 00:34:29.670 Device Self-Test: Not Supported 00:34:29.670 Directives: Not Supported 00:34:29.670 NVMe-MI: Not Supported 00:34:29.670 Virtualization Management: Not Supported 00:34:29.670 Doorbell Buffer Config: Not Supported 00:34:29.670 Get LBA Status Capability: Not Supported 00:34:29.670 Command & Feature Lockdown Capability: Not Supported 00:34:29.670 Abort Command Limit: 1 00:34:29.670 Async Event Request Limit: 1 00:34:29.670 Number of Firmware Slots: N/A 00:34:29.670 Firmware Slot 1 Read-Only: N/A 00:34:29.670 Firmware Activation Without Reset: N/A 00:34:29.670 Multiple Update Detection Support: N/A 
00:34:29.670 Firmware Update Granularity: No Information Provided 00:34:29.670 Per-Namespace SMART Log: No 00:34:29.670 Asymmetric Namespace Access Log Page: Not Supported 00:34:29.670 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:29.670 Command Effects Log Page: Not Supported 00:34:29.670 Get Log Page Extended Data: Supported 00:34:29.670 Telemetry Log Pages: Not Supported 00:34:29.670 Persistent Event Log Pages: Not Supported 00:34:29.670 Supported Log Pages Log Page: May Support 00:34:29.670 Commands Supported & Effects Log Page: Not Supported 00:34:29.670 Feature Identifiers & Effects Log Page:May Support 00:34:29.670 NVMe-MI Commands & Effects Log Page: May Support 00:34:29.670 Data Area 4 for Telemetry Log: Not Supported 00:34:29.670 Error Log Page Entries Supported: 1 00:34:29.670 Keep Alive: Not Supported 00:34:29.670 00:34:29.670 NVM Command Set Attributes 00:34:29.670 ========================== 00:34:29.670 Submission Queue Entry Size 00:34:29.670 Max: 1 00:34:29.670 Min: 1 00:34:29.670 Completion Queue Entry Size 00:34:29.670 Max: 1 00:34:29.670 Min: 1 00:34:29.670 Number of Namespaces: 0 00:34:29.670 Compare Command: Not Supported 00:34:29.670 Write Uncorrectable Command: Not Supported 00:34:29.670 Dataset Management Command: Not Supported 00:34:29.670 Write Zeroes Command: Not Supported 00:34:29.670 Set Features Save Field: Not Supported 00:34:29.670 Reservations: Not Supported 00:34:29.670 Timestamp: Not Supported 00:34:29.670 Copy: Not Supported 00:34:29.670 Volatile Write Cache: Not Present 00:34:29.670 Atomic Write Unit (Normal): 1 00:34:29.670 Atomic Write Unit (PFail): 1 00:34:29.670 Atomic Compare & Write Unit: 1 00:34:29.670 Fused Compare & Write: Not Supported 00:34:29.670 Scatter-Gather List 00:34:29.670 SGL Command Set: Supported 00:34:29.670 SGL Keyed: Not Supported 00:34:29.670 SGL Bit Bucket Descriptor: Not Supported 00:34:29.670 SGL Metadata Pointer: Not Supported 00:34:29.670 Oversized SGL: Not Supported 00:34:29.670 SGL Metadata Address: Not Supported 00:34:29.670 SGL Offset: Supported 00:34:29.670 Transport SGL Data Block: Not Supported 00:34:29.670 Replay Protected Memory Block: Not Supported 00:34:29.670 00:34:29.670 Firmware Slot Information 00:34:29.670 ========================= 00:34:29.670 Active slot: 0 00:34:29.670 00:34:29.670 00:34:29.670 Error Log 00:34:29.670 ========= 00:34:29.670 00:34:29.670 Active Namespaces 00:34:29.670 ================= 00:34:29.670 Discovery Log Page 00:34:29.670 ================== 00:34:29.670 Generation Counter: 2 00:34:29.670 Number of Records: 2 00:34:29.670 Record Format: 0 00:34:29.670 00:34:29.670 Discovery Log Entry 0 00:34:29.670 ---------------------- 00:34:29.670 Transport Type: 3 (TCP) 00:34:29.670 Address Family: 1 (IPv4) 00:34:29.670 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:29.670 Entry Flags: 00:34:29.670 Duplicate Returned Information: 0 00:34:29.670 Explicit Persistent Connection Support for Discovery: 0 00:34:29.670 Transport Requirements: 00:34:29.670 Secure Channel: Not Specified 00:34:29.670 Port ID: 1 (0x0001) 00:34:29.670 Controller ID: 65535 (0xffff) 00:34:29.670 Admin Max SQ Size: 32 00:34:29.670 Transport Service Identifier: 4420 00:34:29.670 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:29.670 Transport Address: 10.0.0.1 00:34:29.670 Discovery Log Entry 1 00:34:29.670 ---------------------- 00:34:29.670 Transport Type: 3 (TCP) 00:34:29.670 Address Family: 1 (IPv4) 00:34:29.670 Subsystem Type: 2 (NVM Subsystem) 00:34:29.670 Entry Flags: 
00:34:29.670 Duplicate Returned Information: 0 00:34:29.670 Explicit Persistent Connection Support for Discovery: 0 00:34:29.670 Transport Requirements: 00:34:29.670 Secure Channel: Not Specified 00:34:29.670 Port ID: 1 (0x0001) 00:34:29.670 Controller ID: 65535 (0xffff) 00:34:29.670 Admin Max SQ Size: 32 00:34:29.670 Transport Service Identifier: 4420 00:34:29.670 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:29.670 Transport Address: 10.0.0.1 00:34:29.670 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:29.670 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.929 get_feature(0x01) failed 00:34:29.929 get_feature(0x02) failed 00:34:29.929 get_feature(0x04) failed 00:34:29.929 ===================================================== 00:34:29.929 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:29.929 ===================================================== 00:34:29.929 Controller Capabilities/Features 00:34:29.929 ================================ 00:34:29.929 Vendor ID: 0000 00:34:29.929 Subsystem Vendor ID: 0000 00:34:29.929 Serial Number: 62fa16137451ccab1c74 00:34:29.929 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:29.929 Firmware Version: 6.7.0-68 00:34:29.929 Recommended Arb Burst: 6 00:34:29.929 IEEE OUI Identifier: 00 00 00 00:34:29.929 Multi-path I/O 00:34:29.929 May have multiple subsystem ports: Yes 00:34:29.929 May have multiple controllers: Yes 00:34:29.929 Associated with SR-IOV VF: No 00:34:29.929 Max Data Transfer Size: Unlimited 00:34:29.929 Max Number of Namespaces: 1024 00:34:29.929 Max Number of I/O Queues: 128 00:34:29.929 NVMe Specification Version (VS): 1.3 00:34:29.929 NVMe Specification Version (Identify): 1.3 00:34:29.929 Maximum Queue Entries: 1024 00:34:29.929 Contiguous Queues Required: No 00:34:29.929 Arbitration Mechanisms Supported 00:34:29.929 Weighted Round Robin: Not Supported 00:34:29.929 Vendor Specific: Not Supported 00:34:29.929 Reset Timeout: 7500 ms 00:34:29.929 Doorbell Stride: 4 bytes 00:34:29.929 NVM Subsystem Reset: Not Supported 00:34:29.929 Command Sets Supported 00:34:29.929 NVM Command Set: Supported 00:34:29.929 Boot Partition: Not Supported 00:34:29.929 Memory Page Size Minimum: 4096 bytes 00:34:29.930 Memory Page Size Maximum: 4096 bytes 00:34:29.930 Persistent Memory Region: Not Supported 00:34:29.930 Optional Asynchronous Events Supported 00:34:29.930 Namespace Attribute Notices: Supported 00:34:29.930 Firmware Activation Notices: Not Supported 00:34:29.930 ANA Change Notices: Supported 00:34:29.930 PLE Aggregate Log Change Notices: Not Supported 00:34:29.930 LBA Status Info Alert Notices: Not Supported 00:34:29.930 EGE Aggregate Log Change Notices: Not Supported 00:34:29.930 Normal NVM Subsystem Shutdown event: Not Supported 00:34:29.930 Zone Descriptor Change Notices: Not Supported 00:34:29.930 Discovery Log Change Notices: Not Supported 00:34:29.930 Controller Attributes 00:34:29.930 128-bit Host Identifier: Supported 00:34:29.930 Non-Operational Permissive Mode: Not Supported 00:34:29.930 NVM Sets: Not Supported 00:34:29.930 Read Recovery Levels: Not Supported 00:34:29.930 Endurance Groups: Not Supported 00:34:29.930 Predictable Latency Mode: Not Supported 00:34:29.930 Traffic Based Keep ALive: Supported 00:34:29.930 Namespace Granularity: Not Supported 
00:34:29.930 SQ Associations: Not Supported 00:34:29.930 UUID List: Not Supported 00:34:29.930 Multi-Domain Subsystem: Not Supported 00:34:29.930 Fixed Capacity Management: Not Supported 00:34:29.930 Variable Capacity Management: Not Supported 00:34:29.930 Delete Endurance Group: Not Supported 00:34:29.930 Delete NVM Set: Not Supported 00:34:29.930 Extended LBA Formats Supported: Not Supported 00:34:29.930 Flexible Data Placement Supported: Not Supported 00:34:29.930 00:34:29.930 Controller Memory Buffer Support 00:34:29.930 ================================ 00:34:29.930 Supported: No 00:34:29.930 00:34:29.930 Persistent Memory Region Support 00:34:29.930 ================================ 00:34:29.930 Supported: No 00:34:29.930 00:34:29.930 Admin Command Set Attributes 00:34:29.930 ============================ 00:34:29.930 Security Send/Receive: Not Supported 00:34:29.930 Format NVM: Not Supported 00:34:29.930 Firmware Activate/Download: Not Supported 00:34:29.930 Namespace Management: Not Supported 00:34:29.930 Device Self-Test: Not Supported 00:34:29.930 Directives: Not Supported 00:34:29.930 NVMe-MI: Not Supported 00:34:29.930 Virtualization Management: Not Supported 00:34:29.930 Doorbell Buffer Config: Not Supported 00:34:29.930 Get LBA Status Capability: Not Supported 00:34:29.930 Command & Feature Lockdown Capability: Not Supported 00:34:29.930 Abort Command Limit: 4 00:34:29.930 Async Event Request Limit: 4 00:34:29.930 Number of Firmware Slots: N/A 00:34:29.930 Firmware Slot 1 Read-Only: N/A 00:34:29.930 Firmware Activation Without Reset: N/A 00:34:29.930 Multiple Update Detection Support: N/A 00:34:29.930 Firmware Update Granularity: No Information Provided 00:34:29.930 Per-Namespace SMART Log: Yes 00:34:29.930 Asymmetric Namespace Access Log Page: Supported 00:34:29.930 ANA Transition Time : 10 sec 00:34:29.930 00:34:29.930 Asymmetric Namespace Access Capabilities 00:34:29.930 ANA Optimized State : Supported 00:34:29.930 ANA Non-Optimized State : Supported 00:34:29.930 ANA Inaccessible State : Supported 00:34:29.930 ANA Persistent Loss State : Supported 00:34:29.930 ANA Change State : Supported 00:34:29.930 ANAGRPID is not changed : No 00:34:29.930 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:29.930 00:34:29.930 ANA Group Identifier Maximum : 128 00:34:29.930 Number of ANA Group Identifiers : 128 00:34:29.930 Max Number of Allowed Namespaces : 1024 00:34:29.930 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:29.930 Command Effects Log Page: Supported 00:34:29.930 Get Log Page Extended Data: Supported 00:34:29.930 Telemetry Log Pages: Not Supported 00:34:29.930 Persistent Event Log Pages: Not Supported 00:34:29.930 Supported Log Pages Log Page: May Support 00:34:29.930 Commands Supported & Effects Log Page: Not Supported 00:34:29.930 Feature Identifiers & Effects Log Page:May Support 00:34:29.930 NVMe-MI Commands & Effects Log Page: May Support 00:34:29.930 Data Area 4 for Telemetry Log: Not Supported 00:34:29.930 Error Log Page Entries Supported: 128 00:34:29.930 Keep Alive: Supported 00:34:29.930 Keep Alive Granularity: 1000 ms 00:34:29.930 00:34:29.930 NVM Command Set Attributes 00:34:29.930 ========================== 00:34:29.930 Submission Queue Entry Size 00:34:29.930 Max: 64 00:34:29.930 Min: 64 00:34:29.930 Completion Queue Entry Size 00:34:29.930 Max: 16 00:34:29.930 Min: 16 00:34:29.930 Number of Namespaces: 1024 00:34:29.930 Compare Command: Not Supported 00:34:29.930 Write Uncorrectable Command: Not Supported 00:34:29.930 Dataset Management Command: Supported 
00:34:29.930 Write Zeroes Command: Supported 00:34:29.930 Set Features Save Field: Not Supported 00:34:29.930 Reservations: Not Supported 00:34:29.930 Timestamp: Not Supported 00:34:29.930 Copy: Not Supported 00:34:29.930 Volatile Write Cache: Present 00:34:29.930 Atomic Write Unit (Normal): 1 00:34:29.930 Atomic Write Unit (PFail): 1 00:34:29.930 Atomic Compare & Write Unit: 1 00:34:29.930 Fused Compare & Write: Not Supported 00:34:29.930 Scatter-Gather List 00:34:29.930 SGL Command Set: Supported 00:34:29.930 SGL Keyed: Not Supported 00:34:29.930 SGL Bit Bucket Descriptor: Not Supported 00:34:29.930 SGL Metadata Pointer: Not Supported 00:34:29.930 Oversized SGL: Not Supported 00:34:29.930 SGL Metadata Address: Not Supported 00:34:29.930 SGL Offset: Supported 00:34:29.930 Transport SGL Data Block: Not Supported 00:34:29.930 Replay Protected Memory Block: Not Supported 00:34:29.930 00:34:29.930 Firmware Slot Information 00:34:29.930 ========================= 00:34:29.930 Active slot: 0 00:34:29.930 00:34:29.930 Asymmetric Namespace Access 00:34:29.930 =========================== 00:34:29.930 Change Count : 0 00:34:29.930 Number of ANA Group Descriptors : 1 00:34:29.930 ANA Group Descriptor : 0 00:34:29.930 ANA Group ID : 1 00:34:29.930 Number of NSID Values : 1 00:34:29.930 Change Count : 0 00:34:29.930 ANA State : 1 00:34:29.930 Namespace Identifier : 1 00:34:29.930 00:34:29.930 Commands Supported and Effects 00:34:29.930 ============================== 00:34:29.930 Admin Commands 00:34:29.930 -------------- 00:34:29.930 Get Log Page (02h): Supported 00:34:29.930 Identify (06h): Supported 00:34:29.930 Abort (08h): Supported 00:34:29.930 Set Features (09h): Supported 00:34:29.930 Get Features (0Ah): Supported 00:34:29.930 Asynchronous Event Request (0Ch): Supported 00:34:29.930 Keep Alive (18h): Supported 00:34:29.930 I/O Commands 00:34:29.930 ------------ 00:34:29.930 Flush (00h): Supported 00:34:29.930 Write (01h): Supported LBA-Change 00:34:29.930 Read (02h): Supported 00:34:29.930 Write Zeroes (08h): Supported LBA-Change 00:34:29.930 Dataset Management (09h): Supported 00:34:29.930 00:34:29.930 Error Log 00:34:29.930 ========= 00:34:29.930 Entry: 0 00:34:29.930 Error Count: 0x3 00:34:29.930 Submission Queue Id: 0x0 00:34:29.930 Command Id: 0x5 00:34:29.930 Phase Bit: 0 00:34:29.930 Status Code: 0x2 00:34:29.930 Status Code Type: 0x0 00:34:29.931 Do Not Retry: 1 00:34:29.931 Error Location: 0x28 00:34:29.931 LBA: 0x0 00:34:29.931 Namespace: 0x0 00:34:29.931 Vendor Log Page: 0x0 00:34:29.931 ----------- 00:34:29.931 Entry: 1 00:34:29.931 Error Count: 0x2 00:34:29.931 Submission Queue Id: 0x0 00:34:29.931 Command Id: 0x5 00:34:29.931 Phase Bit: 0 00:34:29.931 Status Code: 0x2 00:34:29.931 Status Code Type: 0x0 00:34:29.931 Do Not Retry: 1 00:34:29.931 Error Location: 0x28 00:34:29.931 LBA: 0x0 00:34:29.931 Namespace: 0x0 00:34:29.931 Vendor Log Page: 0x0 00:34:29.931 ----------- 00:34:29.931 Entry: 2 00:34:29.931 Error Count: 0x1 00:34:29.931 Submission Queue Id: 0x0 00:34:29.931 Command Id: 0x4 00:34:29.931 Phase Bit: 0 00:34:29.931 Status Code: 0x2 00:34:29.931 Status Code Type: 0x0 00:34:29.931 Do Not Retry: 1 00:34:29.931 Error Location: 0x28 00:34:29.931 LBA: 0x0 00:34:29.931 Namespace: 0x0 00:34:29.931 Vendor Log Page: 0x0 00:34:29.931 00:34:29.931 Number of Queues 00:34:29.931 ================ 00:34:29.931 Number of I/O Submission Queues: 128 00:34:29.931 Number of I/O Completion Queues: 128 00:34:29.931 00:34:29.931 ZNS Specific Controller Data 00:34:29.931 
============================ 00:34:29.931 Zone Append Size Limit: 0 00:34:29.931 00:34:29.931 00:34:29.931 Active Namespaces 00:34:29.931 ================= 00:34:29.931 get_feature(0x05) failed 00:34:29.931 Namespace ID:1 00:34:29.931 Command Set Identifier: NVM (00h) 00:34:29.931 Deallocate: Supported 00:34:29.931 Deallocated/Unwritten Error: Not Supported 00:34:29.931 Deallocated Read Value: Unknown 00:34:29.931 Deallocate in Write Zeroes: Not Supported 00:34:29.931 Deallocated Guard Field: 0xFFFF 00:34:29.931 Flush: Supported 00:34:29.931 Reservation: Not Supported 00:34:29.931 Namespace Sharing Capabilities: Multiple Controllers 00:34:29.931 Size (in LBAs): 1953525168 (931GiB) 00:34:29.931 Capacity (in LBAs): 1953525168 (931GiB) 00:34:29.931 Utilization (in LBAs): 1953525168 (931GiB) 00:34:29.931 UUID: 67f49847-caeb-4cf3-b947-755699f6e5f6 00:34:29.931 Thin Provisioning: Not Supported 00:34:29.931 Per-NS Atomic Units: Yes 00:34:29.931 Atomic Boundary Size (Normal): 0 00:34:29.931 Atomic Boundary Size (PFail): 0 00:34:29.931 Atomic Boundary Offset: 0 00:34:29.931 NGUID/EUI64 Never Reused: No 00:34:29.931 ANA group ID: 1 00:34:29.931 Namespace Write Protected: No 00:34:29.931 Number of LBA Formats: 1 00:34:29.931 Current LBA Format: LBA Format #00 00:34:29.931 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:29.931 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:29.931 rmmod nvme_tcp 00:34:29.931 rmmod nvme_fabrics 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:29.931 14:36:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.834 14:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:31.834 
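Note: the controller data above was produced by the spdk_nvme_identify call recorded in the trace, and it can be reproduced by hand before clean_kernel_target (next) tears the kernel target down. The transport string and binary path below are copied verbatim from the log; the discovery invocation is an assumption, since the trace only shows its output.

  BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
  # data subsystem, exactly as invoked by identify_kernel_nvmf.sh above
  $BIN -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  # discovery subsystem (assumed invocation; presumably what produced the discovery log page printed first)
  $BIN -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

The get_feature(0x01/0x02/0x04) failures logged just before the controller data are printed by the tool itself and do not abort the identify, as the full output above shows.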
14:36:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:31.834 14:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:31.834 14:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:34:31.834 14:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:31.834 14:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:31.834 14:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:31.834 14:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:31.834 14:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:31.834 14:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:31.834 14:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:33.211 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:33.211 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:33.211 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:33.211 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:33.211 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:33.211 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:33.211 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:33.211 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:33.211 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:33.211 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:33.211 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:33.211 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:33.211 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:33.211 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:33.211 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:33.211 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:34.144 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:34.401 00:34:34.401 real 0m9.591s 00:34:34.401 user 0m2.087s 00:34:34.401 sys 0m3.430s 00:34:34.401 14:36:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:34.401 14:36:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:34.401 ************************************ 00:34:34.401 END TEST nvmf_identify_kernel_target 00:34:34.401 ************************************ 00:34:34.401 14:36:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:34.401 14:36:43 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:34.401 14:36:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:34.401 14:36:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:34.401 14:36:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:34.401 ************************************ 00:34:34.401 START TEST nvmf_auth_host 00:34:34.401 ************************************ 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:34.401 * Looking for test storage... 00:34:34.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:34.401 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:34:34.402 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:36.304 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:36.305 
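The pci_bus_cache lookups above are how gather_supported_nvmf_pci_devs decides which ports this job can use: the e810 array is filled from Intel device IDs 0x1592/0x159b, x722 from 0x37d2, and the mlx array from the listed Mellanox IDs. A stand-alone sketch of the same check (not the script itself, just the equivalent lspci/sysfs queries) looks like this:

  # find E810 ports by the vendor:device pairs the script registers (8086:1592 / 8086:159b)
  lspci -Dnn | grep -E '8086:(1592|159b)'
  # map a matching PCI address to its net device name via the same sysfs glob the script uses
  ls /sys/bus/pci/devices/0000:0a:00.0/net/

On this host the two E810 functions at 0000:0a:00.0/.1 resolve to cvl_0_0 and cvl_0_1, as the trace reports below.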
14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:36.305 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:36.305 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:36.305 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:36.305 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:36.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:36.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:34:36.305 00:34:36.305 --- 10.0.0.2 ping statistics --- 00:34:36.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:36.305 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:36.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:36.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:34:36.305 00:34:36.305 --- 10.0.0.1 ping statistics --- 00:34:36.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:36.305 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1532697 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1532697 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1532697 ']' 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
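To summarise nvmf_tcp_init above: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, the two sides are addressed 10.0.0.2 and 10.0.0.1, and an iptables rule accepts the NVMe/TCP port. The same plumbing, condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target (0.228 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator (0.116 ms above)

Both pings succeed in the log, so the nvmf_tgt application started next is launched with the ip netns exec cvl_0_0_ns_spdk prefix (NVMF_TARGET_NS_CMD).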
00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:36.305 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9673f7083f7e1977efc2d296b2baf649 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gUX 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9673f7083f7e1977efc2d296b2baf649 0 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9673f7083f7e1977efc2d296b2baf649 0 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9673f7083f7e1977efc2d296b2baf649 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gUX 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gUX 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.gUX 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:37.678 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:34:37.679 
14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=388492143525f6291e4c4b54aa4502d0f0e6222f8cd68ea4cf1cfb06f1ff7634 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.hid 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 388492143525f6291e4c4b54aa4502d0f0e6222f8cd68ea4cf1cfb06f1ff7634 3 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 388492143525f6291e4c4b54aa4502d0f0e6222f8cd68ea4cf1cfb06f1ff7634 3 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=388492143525f6291e4c4b54aa4502d0f0e6222f8cd68ea4cf1cfb06f1ff7634 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.hid 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.hid 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.hid 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=52508e7feaf9598d232ab0d9331a9b6e930c0cef5c7296c2 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.rQB 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 52508e7feaf9598d232ab0d9331a9b6e930c0cef5c7296c2 0 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 52508e7feaf9598d232ab0d9331a9b6e930c0cef5c7296c2 0 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=52508e7feaf9598d232ab0d9331a9b6e930c0cef5c7296c2 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.rQB 00:34:37.679 14:36:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.rQB 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.rQB 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=06f054331c6ceae370c4be0ef3d4738d8d97092435a4baf4 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vmV 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 06f054331c6ceae370c4be0ef3d4738d8d97092435a4baf4 2 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 06f054331c6ceae370c4be0ef3d4738d8d97092435a4baf4 2 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=06f054331c6ceae370c4be0ef3d4738d8d97092435a4baf4 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:34:37.679 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vmV 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vmV 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.vmV 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=692219e00943bb7f482f68ba89dc9436 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.16A 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 692219e00943bb7f482f68ba89dc9436 1 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 692219e00943bb7f482f68ba89dc9436 1 
00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=692219e00943bb7f482f68ba89dc9436 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.16A 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.16A 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.16A 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a461e2d1673342955aaec00857952faa 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7dn 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a461e2d1673342955aaec00857952faa 1 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a461e2d1673342955aaec00857952faa 1 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a461e2d1673342955aaec00857952faa 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7dn 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7dn 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.7dn 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=548b0558b46157f245a0970e08db4d8b9301e036434926e0 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.iyS 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 548b0558b46157f245a0970e08db4d8b9301e036434926e0 2 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 548b0558b46157f245a0970e08db4d8b9301e036434926e0 2 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=548b0558b46157f245a0970e08db4d8b9301e036434926e0 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:34:37.679 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.iyS 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.iyS 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.iyS 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=77326f1c236263956090a1f06ef0cc23 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gGv 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 77326f1c236263956090a1f06ef0cc23 0 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 77326f1c236263956090a1f06ef0cc23 0 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=77326f1c236263956090a1f06ef0cc23 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gGv 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gGv 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.gGv 00:34:37.937 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7ac46952b1feb7c7c0d006106435a27e864adf9ebdb8e9f5eeb8034e422814b2 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.oL1 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7ac46952b1feb7c7c0d006106435a27e864adf9ebdb8e9f5eeb8034e422814b2 3 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7ac46952b1feb7c7c0d006106435a27e864adf9ebdb8e9f5eeb8034e422814b2 3 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7ac46952b1feb7c7c0d006106435a27e864adf9ebdb8e9f5eeb8034e422814b2 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.oL1 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.oL1 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.oL1 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1532697 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1532697 ']' 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:37.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
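Each of the /tmp/spdk.key-* files generated above (gUX, hid, rQB, vmV, 16A, 7dn, iyS, gGv, oL1) is a DH-HMAC-CHAP secret: gen_dhchap_key draws random hex from /dev/urandom with xxd, and format_key wraps it into the DHHC-1 text form. The inline python behind format_key is not echoed by the trace, so the following is only a hedged reconstruction of that representation (the secret string plus a little-endian CRC-32 trailer, base64-encoded, with the hash id 0..3 in the middle field), not the script's exact code:

  key=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex chars, as gen_dhchap_key null 32 does above
  # assumed DHHC-1 layout: DHHC-1:<hash id>:<base64(secret || crc32-le)>:
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" 0

With digest argument 0 (null) this yields a DHHC-1:00:...: secret like keys[0]; the sha256/sha384/sha512 variants pass 1, 2 and 3, matching the digest= values in the trace.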
00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:37.938 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gUX 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.hid ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hid 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.rQB 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.vmV ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vmV 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.16A 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.7dn ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7dn 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.iyS 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.gGv ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.gGv 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.oL1 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
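Each generated key file is then registered with the running SPDK target through the keyring_file_add_key RPC, pairing key<i> with its controller key ckey<i> (key4 has no controller key). Outside this harness the same calls could be driven with scripts/rpc.py, assuming the wrapper exposes the method under the same name and positional arguments; the socket path and file names below simply mirror the trace.

  # Register host and controller keys with the target keyring (mirrors the rpc_cmd calls above).
  RPC=./scripts/rpc.py
  SOCK=/var/tmp/spdk.sock
  "$RPC" -s "$SOCK" keyring_file_add_key key0  /tmp/spdk.key-null.gUX
  "$RPC" -s "$SOCK" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hid
  "$RPC" -s "$SOCK" keyring_file_add_key key1  /tmp/spdk.key-null.rQB
  "$RPC" -s "$SOCK" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vmV
  # ...and likewise for key2/ckey2, key3/ckey3 and key4.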
00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:38.196 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:39.569 Waiting for block devices as requested 00:34:39.569 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:39.569 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:39.570 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:39.828 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:39.828 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:39.828 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:39.828 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:40.086 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:40.086 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:40.086 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:40.344 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:40.344 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:40.344 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:40.344 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:40.602 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:40.602 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:40.602 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:41.168 No valid GPT data, bailing 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:41.168 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:41.426 00:34:41.426 Discovery Log Number of Records 2, Generation counter 2 00:34:41.426 =====Discovery Log Entry 0====== 00:34:41.426 trtype: tcp 00:34:41.426 adrfam: ipv4 00:34:41.426 subtype: current discovery subsystem 00:34:41.426 treq: not specified, sq flow control disable supported 00:34:41.426 portid: 1 00:34:41.426 trsvcid: 4420 00:34:41.426 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:41.426 traddr: 10.0.0.1 00:34:41.426 eflags: none 00:34:41.426 sectype: none 00:34:41.426 =====Discovery Log Entry 1====== 00:34:41.426 trtype: tcp 00:34:41.426 adrfam: ipv4 00:34:41.426 subtype: nvme subsystem 00:34:41.426 treq: not specified, sq flow control disable supported 00:34:41.426 portid: 1 00:34:41.426 trsvcid: 4420 00:34:41.426 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:41.426 traddr: 10.0.0.1 00:34:41.426 eflags: none 00:34:41.426 sectype: none 00:34:41.426 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:41.426 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:41.426 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 
]] 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.427 nvme0n1 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.427 
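The configure_kernel_target steps above build a kernel soft-target purely through configfs: a subsystem with one namespace backed by /dev/nvme0n1, a TCP port on 10.0.0.1:4420, and a symlink exposing the subsystem on that port; nvme discover then confirms that both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 are reachable. The echo redirect targets are not visible in the xtrace, so the attribute names below are the kernel's standard nvmet configfs attributes and should be read as an assumption about what those hidden redirects point at.

  # Condensed sketch of the kernel nvmet target built above (run as root, nvmet module loaded).
  NVMET=/sys/kernel/config/nvmet
  SUBSYS=$NVMET/subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir -p "$SUBSYS/namespaces/1" "$NVMET/ports/1"
  echo 1            > "$SUBSYS/attr_allow_any_host"      # assumed target of one of the bare "echo 1" lines
  echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
  echo 1            > "$SUBSYS/namespaces/1/enable"
  echo 10.0.0.1     > "$NVMET/ports/1/addr_traddr"
  echo tcp          > "$NVMET/ports/1/addr_trtype"
  echo 4420         > "$NVMET/ports/1/addr_trsvcid"
  echo ipv4         > "$NVMET/ports/1/addr_adrfam"
  ln -s "$SUBSYS" "$NVMET/ports/1/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420               # should list the discovery subsystem and cnode0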
14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.427 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.685 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.685 
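Each nvmet_auth_set_key iteration above pushes one key/ckey pair plus the chosen hash and DH group to the kernel host entry, and bdev_nvme_set_options restricts the initiator to the same digest and group before the connection attempt. Again the redirect targets are hidden by the xtrace, so the per-host attribute names in this sketch (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are an assumption based on the kernel's nvmet DH-HMAC-CHAP configfs layout; the key values are elided here but appear in full in the trace.

  # Per-iteration authentication setup, sketched (attribute names assumed, see note above).
  HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'        > "$HOST/dhchap_hash"
  echo ffdhe2048             > "$HOST/dhchap_dhgroup"
  echo 'DHHC-1:00:...'       > "$HOST/dhchap_key"        # key0, full value in the trace above
  echo 'DHHC-1:03:...'       > "$HOST/dhchap_ctrl_key"   # ckey0, full value in the trace above
  # Initiator side: allow only the matching digest and DH group for this round.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048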
14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.685 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.685 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.685 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.685 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.685 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.685 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.685 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.685 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.685 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.685 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.685 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:41.685 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.685 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.685 nvme0n1 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.685 14:36:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.685 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.943 nvme0n1 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.943 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.944 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.944 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.944 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.944 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.944 14:36:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.944 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.944 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.944 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.944 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.944 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.944 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:41.944 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.944 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.233 nvme0n1 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:42.233 14:36:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.233 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.514 nvme0n1 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- 
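The pattern repeated for every keyid above is: attach a controller to the kernel target with the matching --dhchap-key/--dhchap-ctrlr-key, confirm the authenticated connection by reading the controller name back, then detach before the next digest/dhgroup/key combination. Driven through rpc.py (flags and names taken from the trace; the RPC variable is just shorthand), one round reduces to roughly:

  # One connect_authenticate round: attach with DH-HMAC-CHAP keys, verify, detach.
  RPC=./scripts/rpc.py
  "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  name=$("$RPC" bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]] || echo "attach/authentication failed for this key" >&2
  "$RPC" bdev_nvme_detach_controller nvme0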
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.514 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:42.515 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:42.515 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:42.515 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:42.515 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.515 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.773 nvme0n1 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.773 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.032 nvme0n1 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.032 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.290 nvme0n1 00:34:43.290 
14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.290 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.290 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.290 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.290 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.290 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.290 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.290 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.290 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.290 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.290 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.290 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.290 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:43.290 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.290 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.291 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.549 nvme0n1 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.549 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.806 nvme0n1 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.806 
14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.806 14:36:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.806 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.062 nvme0n1 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:44.062 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:44.063 14:36:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.063 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.320 nvme0n1 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.320 14:36:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.320 14:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.886 nvme0n1 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.886 14:36:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.886 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.145 nvme0n1 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
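Every attach in this trace is preceded by get_main_ns_ip, whose expansion is visible in the log: an associative array maps the transport to the environment variable that carries the initiator-facing IP, and the resolved value (10.0.0.1 for TCP here) becomes the -a argument of bdev_nvme_attach_controller. A hedged reconstruction follows; the candidate variable names come from the trace, while the transport variable name ($TEST_TRANSPORT) and the error handling are assumptions.

# Reconstruction of get_main_ns_ip as expanded in the trace: pick the
# environment variable holding the main namespace IP for the transport
# under test and print its value.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}     # e.g. NVMF_INITIATOR_IP for tcp
    [[ -z ${!ip} ]] && return 1              # indirect expansion of that variable

    echo "${!ip}"
}

# The returned address feeds -a of bdev_nvme_attach_controller, e.g.:
#   TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # -> 10.0.0.1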
00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.145 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.404 nvme0n1 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.404 14:36:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.404 14:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.663 nvme0n1 00:34:45.663 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.663 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.663 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.663 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.663 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.663 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:45.921 14:36:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.921 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.487 nvme0n1 00:34:46.487 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.487 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.487 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.487 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.487 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.487 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.487 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.487 
14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.487 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.487 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.487 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.488 14:36:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.488 14:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.054 nvme0n1 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:47.054 14:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:47.055 14:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:47.055 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:47.055 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.055 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.621 nvme0n1 00:34:47.621 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.621 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.621 14:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.621 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.621 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.621 14:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.621 
14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:47.621 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.622 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.188 nvme0n1 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.188 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:48.189 14:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:48.446 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:48.446 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.446 14:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.012 nvme0n1 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.012 14:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.946 nvme0n1 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.946 14:36:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.946 14:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.879 nvme0n1 00:34:50.879 14:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.879 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.879 14:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.879 14:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.879 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.879 14:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.879 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.879 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.879 14:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.879 14:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.135 14:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.064 nvme0n1 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.064 
14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.064 14:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.992 nvme0n1 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:52.992 
14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.992 14:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.922 nvme0n1 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.922 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.179 nvme0n1 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.179 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.437 nvme0n1 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.437 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.695 nvme0n1 00:34:54.695 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.695 14:37:03 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.695 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.695 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.695 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.695 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.695 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.695 14:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.695 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.695 14:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.695 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.695 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.695 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:54.695 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.695 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:54.695 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:54.695 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:54.695 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:54.695 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:54.695 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:54.695 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:54.695 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:54.695 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.696 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.954 nvme0n1 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.954 nvme0n1 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.954 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:55.212 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:55.213 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:55.213 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.213 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.213 nvme0n1 00:34:55.213 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.213 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.213 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.213 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.213 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.213 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.471 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.471 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.471 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.471 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.471 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.471 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.471 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:55.471 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.471 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.471 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:55.471 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.472 nvme0n1 00:34:55.472 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.730 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.730 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.730 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.730 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.730 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.730 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.730 14:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.730 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.730 14:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.730 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.730 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:34:55.730 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:55.730 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.730 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.730 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:55.730 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:55.730 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:55.730 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:55.730 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.730 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:55.730 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:55.730 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.731 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.989 nvme0n1 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.989 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.246 nvme0n1 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.246 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.247 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.504 nvme0n1 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.504 14:37:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:34:56.504 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.505 14:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.763 nvme0n1 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.763 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.021 nvme0n1 00:34:57.021 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.021 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.021 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.021 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.021 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.021 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.021 14:37:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.021 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.021 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.021 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.279 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.537 nvme0n1 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:57.537 14:37:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:57.537 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.538 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.538 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:57.538 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.538 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:57.538 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:57.538 14:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:57.538 14:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:57.538 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.538 14:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.795 nvme0n1 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:57.795 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:57.796 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.362 nvme0n1 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:58.362 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.363 14:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.929 nvme0n1 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.929 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.495 nvme0n1 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.495 14:37:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.495 14:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.062 nvme0n1 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:00.062 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:00.063 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.063 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.629 nvme0n1 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.629 14:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.196 nvme0n1 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.196 14:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.128 nvme0n1 00:35:02.128 14:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.128 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.128 14:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.128 14:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.128 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.385 14:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.319 nvme0n1 00:35:03.319 14:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.319 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.319 14:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.319 14:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.319 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.319 14:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.319 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.319 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.320 14:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.254 nvme0n1 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.254 14:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.512 14:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.446 nvme0n1 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.446 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:05.447 14:37:14 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.447 14:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.381 nvme0n1 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.381 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.639 nvme0n1 00:35:06.639 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.639 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.639 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.639 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.639 14:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.640 14:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.640 14:37:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.640 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.898 nvme0n1 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.898 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.899 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.157 nvme0n1 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.157 14:37:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:07.157 14:37:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.157 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.422 nvme0n1 00:35:07.422 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.422 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.422 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.423 nvme0n1 00:35:07.423 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.722 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.722 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.722 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.722 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.722 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.722 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.722 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.722 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.722 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.722 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.722 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.723 14:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.723 nvme0n1 00:35:07.723 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.723 
14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.723 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.723 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.723 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.723 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:35:07.991 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.992 14:37:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.992 nvme0n1 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.992 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.248 nvme0n1 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.248 14:37:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.248 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.505 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.505 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.505 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
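For readers following the trace: each connect_authenticate pass logged here reduces to four host-side RPC calls. The sketch below replays one pass for the keyid=3 / ffdhe3072 combination, under the assumption that rpc_cmd in this log is a thin wrapper around scripts/rpc.py talking to the running target, and that the DH-HMAC-CHAP secrets have already been registered under the names key3 and ckey3 (that registration happens outside this excerpt).

    # Minimal replay of one connect_authenticate pass (a sketch, not the verbatim test script)
    RPC=scripts/rpc.py   # assumed path to the SPDK RPC client wrapped by rpc_cmd
    # 1. Restrict the host to the digest/dhgroup pair under test.
    $RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    # 2. Connect with DH-HMAC-CHAP, offering key3 and requiring ckey3 from the controller.
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # 3. Verify the controller actually appeared under the expected name.
    [[ "$($RPC bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    # 4. Tear it down before the next digest/dhgroup/key combination.
    $RPC bdev_nvme_detach_controller nvme0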
00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.506 nvme0n1 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.506 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.764 14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.764 
14:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.764 nvme0n1 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.764 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.022 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.280 nvme0n1 00:35:09.280 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.280 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.280 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.280 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.280 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.280 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.280 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.280 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.280 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.281 14:37:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.281 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.539 nvme0n1 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
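The get_main_ns_ip helper that the trace keeps expanding is just a transport-to-address lookup, and every expansion in this excerpt resolves to 10.0.0.1 because the run uses TCP. Below is a condensed re-reading of that xtrace output; the transport variable name (TEST_TRANSPORT) is an assumption, since the trace only shows its already-expanded value "tcp", and the real nvmf/common.sh may differ in detail.

    get_main_ns_ip() {   # condensed from the xtrace above, not copied from the repo
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # values are variable *names*
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                    # expands to "tcp" in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # NVMF_INITIATOR_IP here
        [[ -z ${!ip} ]] && return 1                             # indirect lookup: 10.0.0.1
        echo "${!ip}"
    }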
00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.539 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.540 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:09.540 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.540 14:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:09.540 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.540 14:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.540 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.106 nvme0n1 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.106 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.364 nvme0n1 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.364 14:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.623 nvme0n1 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.623 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.189 nvme0n1 00:35:11.189 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.189 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.189 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.189 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.189 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.189 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
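Stepping back, this whole stretch of log is one digest's worth of a nested sweep: for each DH group the target is re-keyed and the connect/verify/detach cycle is run for every key index. The driver loop sketched below is inferred from the host/auth.sh@101-104 references in the trace; the keys/ckeys arrays are populated earlier in the script, outside this excerpt, and the real test also sweeps other digests.

    # Sweep implied by the trace; array contents reflect only what this excerpt exercises.
    digest=sha512
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                # key indexes 0..4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: install key/ckey for this id
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: set options, attach, verify, detach
        done
    done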
00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.447 14:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.012 nvme0n1 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.012 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.577 nvme0n1 00:35:12.577 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.577 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.577 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.577 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.577 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.577 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.578 14:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.141 nvme0n1 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.141 14:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.706 nvme0n1 00:35:13.706 14:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.706 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.706 14:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.706 14:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.706 14:37:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.706 14:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.706 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.706 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.706 14:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.706 14:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.706 14:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.706 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:13.706 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY3M2Y3MDgzZjdlMTk3N2VmYzJkMjk2YjJiYWY2NDmlI7Zr: 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: ]] 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzg4NDkyMTQzNTI1ZjYyOTFlNGM0YjU0YWE0NTAyZDBmMGU2MjIyZjhjZDY4ZWE0Y2YxY2ZiMDZmMWZmNzYzNKkfDMg=: 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.965 14:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.899 nvme0n1 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:14.899 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.900 14:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.832 nvme0n1 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.832 14:37:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjkyMjE5ZTAwOTQzYmI3ZjQ4MmY2OGJhODlkYzk0MzZqbXZt: 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: ]] 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2MWUyZDE2NzMzNDI5NTVhYWVjMDA4NTc5NTJmYWEOAu4b: 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.832 14:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.764 nvme0n1 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ4YjA1NThiNDYxNTdmMjQ1YTA5NzBlMDhkYjRkOGI5MzAxZTAzNjQzNDkyNmUwqPYKEA==: 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: ]] 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzczMjZmMWMyMzYyNjM5NTYwOTBhMWYwNmVmMGNjMjOCvQCK: 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:16.764 14:37:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.764 14:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.136 nvme0n1 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2FjNDY5NTJiMWZlYjdjN2MwZDAwNjEwNjQzNWEyN2U4NjRhZGY5ZWJkYjhlOWY1ZWViODAzNGU0MjI4MTRiMr2hUEo=: 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:18.136 14:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.070 nvme0n1 00:35:19.070 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.070 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.070 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.070 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.070 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.070 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.070 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTI1MDhlN2ZlYWY5NTk4ZDIzMmFiMGQ5MzMxYTliNmU5MzBjMGNlZjVjNzI5NmMy5OlH4Q==: 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZmMDU0MzMxYzZjZWFlMzcwYzRiZTBlZjNkNDczOGQ4ZDk3MDkyNDM1YTRiYWY0DHgkwg==: 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.071 
14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.071 request: 00:35:19.071 { 00:35:19.071 "name": "nvme0", 00:35:19.071 "trtype": "tcp", 00:35:19.071 "traddr": "10.0.0.1", 00:35:19.071 "adrfam": "ipv4", 00:35:19.071 "trsvcid": "4420", 00:35:19.071 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:19.071 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:19.071 "prchk_reftag": false, 00:35:19.071 "prchk_guard": false, 00:35:19.071 "hdgst": false, 00:35:19.071 "ddgst": false, 00:35:19.071 "method": "bdev_nvme_attach_controller", 00:35:19.071 "req_id": 1 00:35:19.071 } 00:35:19.071 Got JSON-RPC error response 00:35:19.071 response: 00:35:19.071 { 00:35:19.071 "code": -5, 00:35:19.071 "message": "Input/output error" 00:35:19.071 } 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.071 request: 00:35:19.071 { 00:35:19.071 "name": "nvme0", 00:35:19.071 "trtype": "tcp", 00:35:19.071 "traddr": "10.0.0.1", 00:35:19.071 "adrfam": "ipv4", 00:35:19.071 "trsvcid": "4420", 00:35:19.071 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:19.071 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:19.071 "prchk_reftag": false, 00:35:19.071 "prchk_guard": false, 00:35:19.071 "hdgst": false, 00:35:19.071 "ddgst": false, 00:35:19.071 "dhchap_key": "key2", 00:35:19.071 "method": "bdev_nvme_attach_controller", 00:35:19.071 "req_id": 1 00:35:19.071 } 00:35:19.071 Got JSON-RPC error response 00:35:19.071 response: 00:35:19.071 { 00:35:19.071 "code": -5, 00:35:19.071 "message": "Input/output error" 00:35:19.071 } 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:19.071 14:37:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:19.071 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.330 request: 00:35:19.330 { 00:35:19.330 "name": "nvme0", 00:35:19.330 "trtype": "tcp", 00:35:19.330 "traddr": "10.0.0.1", 00:35:19.330 "adrfam": "ipv4", 
00:35:19.330 "trsvcid": "4420", 00:35:19.330 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:19.330 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:19.330 "prchk_reftag": false, 00:35:19.330 "prchk_guard": false, 00:35:19.330 "hdgst": false, 00:35:19.330 "ddgst": false, 00:35:19.330 "dhchap_key": "key1", 00:35:19.330 "dhchap_ctrlr_key": "ckey2", 00:35:19.330 "method": "bdev_nvme_attach_controller", 00:35:19.330 "req_id": 1 00:35:19.330 } 00:35:19.330 Got JSON-RPC error response 00:35:19.330 response: 00:35:19.330 { 00:35:19.330 "code": -5, 00:35:19.330 "message": "Input/output error" 00:35:19.330 } 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:19.330 rmmod nvme_tcp 00:35:19.330 rmmod nvme_fabrics 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1532697 ']' 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1532697 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1532697 ']' 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1532697 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1532697 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1532697' 00:35:19.330 killing process with pid 1532697 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1532697 00:35:19.330 14:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1532697 00:35:20.705 14:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:35:20.705 14:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:20.705 14:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:20.705 14:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:20.705 14:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:20.705 14:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:20.705 14:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:20.705 14:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:22.614 14:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:22.614 14:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:22.614 14:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:22.614 14:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:22.614 14:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:22.614 14:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:35:22.614 14:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:22.614 14:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:22.614 14:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:22.614 14:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:22.614 14:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:22.614 14:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:22.614 14:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:23.988 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:23.988 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:23.988 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:23.988 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:23.988 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:23.989 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:23.989 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:23.989 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:23.989 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:23.989 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:23.989 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:23.989 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:23.989 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:23.989 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:23.989 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:23.989 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:24.924 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:24.924 14:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.gUX /tmp/spdk.key-null.rQB /tmp/spdk.key-sha256.16A /tmp/spdk.key-sha384.iyS /tmp/spdk.key-sha512.oL1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:24.924 14:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:26.298 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:26.298 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:26.298 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:26.298 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:26.298 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:26.298 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:26.298 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:26.298 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:26.298 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:26.298 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:26.298 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:26.298 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:26.298 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:26.298 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:26.298 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:26.298 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:26.298 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:26.298 00:35:26.298 real 0m51.853s 00:35:26.298 user 0m49.105s 00:35:26.298 sys 0m6.189s 00:35:26.298 14:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:26.298 14:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.298 ************************************ 00:35:26.298 END TEST nvmf_auth_host 00:35:26.298 ************************************ 00:35:26.298 14:37:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:26.298 14:37:35 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:35:26.298 14:37:35 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:26.298 14:37:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:26.299 14:37:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:26.299 14:37:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:26.299 ************************************ 00:35:26.299 START TEST nvmf_digest 00:35:26.299 ************************************ 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:26.299 * Looking for test storage... 
00:35:26.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:26.299 14:37:35 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:35:26.299 14:37:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:28.202 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:28.202 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:28.202 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:28.202 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:28.203 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:28.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:28.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:35:28.203 00:35:28.203 --- 10.0.0.2 ping statistics --- 00:35:28.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.203 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:35:28.203 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:28.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:28.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:35:28.462 00:35:28.462 --- 10.0.0.1 ping statistics --- 00:35:28.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.462 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:28.462 ************************************ 00:35:28.462 START TEST nvmf_digest_clean 00:35:28.462 ************************************ 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1542500 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1542500 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1542500 ']' 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:28.462 
14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:28.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:28.462 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:28.462 [2024-07-10 14:37:37.818474] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:35:28.462 [2024-07-10 14:37:37.818616] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:28.462 EAL: No free 2048 kB hugepages reported on node 1 00:35:28.721 [2024-07-10 14:37:37.948383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.721 [2024-07-10 14:37:38.196844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:28.721 [2024-07-10 14:37:38.196930] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:28.721 [2024-07-10 14:37:38.196984] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:28.721 [2024-07-10 14:37:38.197029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:28.721 [2024-07-10 14:37:38.197070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
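nvmftestinit and nvmfappstart, traced above, turn the two back-to-back E810 ports (cvl_0_0 / cvl_0_1) into a self-contained TCP test bed: the target-side port is moved into a network namespace and nvmf_tgt runs there, while the initiator side stays in the root namespace. Condensed from the traced commands, and assuming the same interface names and addresses as this run:

    TGT_NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$TGT_NS"
    ip link set cvl_0_0 netns "$TGT_NS"                             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address (root namespace)
    ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
    ip link set cvl_0_1 up
    ip netns exec "$TGT_NS" ip link set cvl_0_0 up
    ip netns exec "$TGT_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2 && ip netns exec "$TGT_NS" ping -c 1 10.0.0.1
    # the target process lives inside the namespace, with init deferred until framework_start_init:
    ip netns exec "$TGT_NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &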
00:35:28.721 [2024-07-10 14:37:38.197141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.652 14:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:29.652 14:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:29.652 14:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:29.652 14:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:29.652 14:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:29.652 14:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:29.652 14:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:29.652 14:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:29.652 14:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:29.652 14:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.652 14:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:29.908 null0 00:35:29.908 [2024-07-10 14:37:39.178404] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:29.908 [2024-07-10 14:37:39.202637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1542654 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1542654 /var/tmp/bperf.sock 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1542654 ']' 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:35:29.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:29.909 14:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:29.909 [2024-07-10 14:37:39.294571] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:35:29.909 [2024-07-10 14:37:39.294731] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542654 ] 00:35:29.909 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.164 [2024-07-10 14:37:39.441794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.421 [2024-07-10 14:37:39.670861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:30.983 14:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:30.983 14:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:30.983 14:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:30.983 14:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:30.983 14:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:31.546 14:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:31.546 14:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:31.802 nvme0n1 00:35:31.802 14:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:31.802 14:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:32.059 Running I/O for 2 seconds... 
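Each run_bperf iteration follows the pattern just traced: start bdevperf idle on its own RPC socket, finish framework init, attach an NVMe/TCP controller with data digest enabled, then kick the workload over RPC. A condensed sketch of the first run (same socket path, address and NQN as this trace):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock
    # -z keeps bdevperf waiting for a perform_tests RPC; --wait-for-rpc defers subsystem init
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
    # --ddgst enables the NVMe/TCP data digest (CRC-32C) on this controller
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests    # runs the 2-second job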
00:35:33.958 00:35:33.958 Latency(us) 00:35:33.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:33.958 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:33.958 nvme0n1 : 2.01 14834.56 57.95 0.00 0.00 8615.10 4587.52 23204.60 00:35:33.958 =================================================================================================================== 00:35:33.958 Total : 14834.56 57.95 0.00 0.00 8615.10 4587.52 23204.60 00:35:33.958 0 00:35:33.958 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:33.958 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:33.958 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:33.958 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:33.958 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:33.958 | select(.opcode=="crc32c") 00:35:33.958 | "\(.module_name) \(.executed)"' 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1542654 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1542654 ']' 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1542654 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1542654 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1542654' 00:35:34.523 killing process with pid 1542654 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1542654 00:35:34.523 Received shutdown signal, test time was about 2.000000 seconds 00:35:34.523 00:35:34.523 Latency(us) 00:35:34.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:34.523 =================================================================================================================== 00:35:34.523 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:34.523 14:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1542654 00:35:35.457 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:35.457 14:37:44 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:35.457 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:35.457 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:35.457 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:35.457 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:35.457 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:35.457 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1543319 00:35:35.458 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:35.458 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1543319 /var/tmp/bperf.sock 00:35:35.458 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1543319 ']' 00:35:35.458 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:35.458 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:35.458 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:35.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:35.458 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:35.458 14:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:35.458 [2024-07-10 14:37:44.888302] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:35:35.458 [2024-07-10 14:37:44.888476] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543319 ] 00:35:35.458 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:35.458 Zero copy mechanism will not be used. 
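The accel_get_stats / jq step traced after the first run repeats after every run: it confirms the digest actually went through the expected accel module (here exp_module=software with executed > 0, since DSA scanning is off). A standalone version of that check against the same bperf socket, reconstructed from the traced commands, would look like:

    SOCK=/var/tmp/bperf.sock
    read -r acc_module acc_executed < <(
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$SOCK" accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # pass when crc32c ran at least once and was handled by the software module
    (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo OK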
00:35:35.716 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.716 [2024-07-10 14:37:45.029596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.974 [2024-07-10 14:37:45.280626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.540 14:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:36.540 14:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:36.540 14:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:36.540 14:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:36.540 14:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:37.169 14:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:37.169 14:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:37.427 nvme0n1 00:35:37.427 14:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:37.427 14:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:37.686 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:37.686 Zero copy mechanism will not be used. 00:35:37.686 Running I/O for 2 seconds... 
00:35:39.584 00:35:39.584 Latency(us) 00:35:39.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:39.584 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:39.584 nvme0n1 : 2.01 1986.97 248.37 0.00 0.00 8045.47 5946.79 13592.65 00:35:39.584 =================================================================================================================== 00:35:39.584 Total : 1986.97 248.37 0.00 0.00 8045.47 5946.79 13592.65 00:35:39.584 0 00:35:39.584 14:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:39.584 14:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:39.584 14:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:39.584 14:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:39.584 | select(.opcode=="crc32c") 00:35:39.584 | "\(.module_name) \(.executed)"' 00:35:39.584 14:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1543319 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1543319 ']' 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1543319 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1543319 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1543319' 00:35:39.842 killing process with pid 1543319 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1543319 00:35:39.842 Received shutdown signal, test time was about 2.000000 seconds 00:35:39.842 00:35:39.842 Latency(us) 00:35:39.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:39.842 =================================================================================================================== 00:35:39.842 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:39.842 14:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1543319 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:40.774 14:37:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1543984 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1543984 /var/tmp/bperf.sock 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1543984 ']' 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:40.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:40.774 14:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:41.031 [2024-07-10 14:37:50.321286] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
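As a sanity check on the result tables above, the MiB/s column is simply IOPS scaled by the I/O size, MiB/s = IOPS * block_size / 2^20: for the two randread runs, 14834.56 * 4096 / 1048576 is about 57.95 and 1986.97 * 131072 / 1048576 is about 248.37, matching the reported values. A quick shell check:

    awk 'BEGIN { printf "%.2f %.2f\n", 14834.56*4096/1048576, 1986.97*131072/1048576 }'
    # prints: 57.95 248.37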
00:35:41.031 [2024-07-10 14:37:50.321465] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543984 ] 00:35:41.031 EAL: No free 2048 kB hugepages reported on node 1 00:35:41.031 [2024-07-10 14:37:50.445164] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.289 [2024-07-10 14:37:50.697002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.854 14:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:41.854 14:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:41.854 14:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:41.854 14:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:41.854 14:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:42.420 14:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:42.420 14:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:42.984 nvme0n1 00:35:42.984 14:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:42.984 14:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:42.984 Running I/O for 2 seconds... 
00:35:45.510 00:35:45.510 Latency(us) 00:35:45.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:45.510 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:45.510 nvme0n1 : 2.01 14602.21 57.04 0.00 0.00 8740.31 3592.34 12621.75 00:35:45.510 =================================================================================================================== 00:35:45.510 Total : 14602.21 57.04 0.00 0.00 8740.31 3592.34 12621.75 00:35:45.510 0 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:45.510 | select(.opcode=="crc32c") 00:35:45.510 | "\(.module_name) \(.executed)"' 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1543984 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1543984 ']' 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1543984 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1543984 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1543984' 00:35:45.510 killing process with pid 1543984 00:35:45.510 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1543984 00:35:45.510 Received shutdown signal, test time was about 2.000000 seconds 00:35:45.510 00:35:45.510 Latency(us) 00:35:45.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:45.511 =================================================================================================================== 00:35:45.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:45.511 14:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1543984 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:46.449 14:37:55 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1544627 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1544627 /var/tmp/bperf.sock 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1544627 ']' 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:46.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:46.449 14:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:46.449 [2024-07-10 14:37:55.882129] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:35:46.449 [2024-07-10 14:37:55.882277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544627 ] 00:35:46.449 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:46.449 Zero copy mechanism will not be used. 
00:35:46.707 EAL: No free 2048 kB hugepages reported on node 1 00:35:46.707 [2024-07-10 14:37:56.014803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.965 [2024-07-10 14:37:56.265201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:47.529 14:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:47.529 14:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:47.529 14:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:47.529 14:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:47.529 14:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:48.094 14:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:48.094 14:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:48.351 nvme0n1 00:35:48.609 14:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:48.609 14:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:48.609 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:48.609 Zero copy mechanism will not be used. 00:35:48.609 Running I/O for 2 seconds... 
00:35:50.508 00:35:50.508 Latency(us) 00:35:50.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:50.508 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:50.508 nvme0n1 : 2.01 2491.68 311.46 0.00 0.00 6403.15 3446.71 9709.04 00:35:50.508 =================================================================================================================== 00:35:50.508 Total : 2491.68 311.46 0.00 0.00 6403.15 3446.71 9709.04 00:35:50.508 0 00:35:50.508 14:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:50.508 14:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:50.766 14:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:50.766 14:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:50.766 14:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:50.766 | select(.opcode=="crc32c") 00:35:50.766 | "\(.module_name) \(.executed)"' 00:35:50.766 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:50.766 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:50.766 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:50.766 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:50.766 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1544627 00:35:50.766 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1544627 ']' 00:35:50.766 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1544627 00:35:50.766 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:50.766 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:50.766 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1544627 00:35:51.025 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:51.025 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:51.025 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1544627' 00:35:51.025 killing process with pid 1544627 00:35:51.025 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1544627 00:35:51.025 Received shutdown signal, test time was about 2.000000 seconds 00:35:51.025 00:35:51.025 Latency(us) 00:35:51.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:51.025 =================================================================================================================== 00:35:51.025 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:51.025 14:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1544627 00:35:51.961 14:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1542500 00:35:51.961 14:38:01 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1542500 ']' 00:35:51.961 14:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1542500 00:35:51.961 14:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:51.961 14:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:51.961 14:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1542500 00:35:51.961 14:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:51.961 14:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:51.961 14:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1542500' 00:35:51.961 killing process with pid 1542500 00:35:51.961 14:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1542500 00:35:51.961 14:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1542500 00:35:53.336 00:35:53.336 real 0m24.901s 00:35:53.336 user 0m47.095s 00:35:53.336 sys 0m4.679s 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:53.336 ************************************ 00:35:53.336 END TEST nvmf_digest_clean 00:35:53.336 ************************************ 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:53.336 ************************************ 00:35:53.336 START TEST nvmf_digest_error 00:35:53.336 ************************************ 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1545476 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1545476 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1545476 ']' 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:53.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:53.336 14:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:53.336 [2024-07-10 14:38:02.769054] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:35:53.336 [2024-07-10 14:38:02.769200] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:53.604 EAL: No free 2048 kB hugepages reported on node 1 00:35:53.604 [2024-07-10 14:38:02.900047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.867 [2024-07-10 14:38:03.123849] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:53.867 [2024-07-10 14:38:03.123913] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:53.867 [2024-07-10 14:38:03.123936] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:53.867 [2024-07-10 14:38:03.123957] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:53.867 [2024-07-10 14:38:03.123974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:53.867 [2024-07-10 14:38:03.124015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.433 14:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:54.433 14:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:35:54.433 14:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:54.433 14:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:54.433 14:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:54.433 14:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:54.433 14:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:54.433 14:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.433 14:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:54.433 [2024-07-10 14:38:03.706296] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:54.433 14:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.433 14:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:54.433 14:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:54.433 14:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.433 14:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:54.690 null0 00:35:54.690 [2024-07-10 14:38:04.088586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:54.690 [2024-07-10 14:38:04.112878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1545629 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1545629 /var/tmp/bperf.sock 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1545629 ']' 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:54.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:54.690 14:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:54.948 [2024-07-10 14:38:04.196755] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:35:54.948 [2024-07-10 14:38:04.196885] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545629 ] 00:35:54.948 EAL: No free 2048 kB hugepages reported on node 1 00:35:54.948 [2024-07-10 14:38:04.325981] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.205 [2024-07-10 14:38:04.579859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:55.769 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:55.769 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:35:55.769 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:55.769 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:56.026 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:56.026 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.026 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:56.026 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.026 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:56.026 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:56.284 nvme0n1 00:35:56.284 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:56.284 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.284 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:56.284 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.284 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:56.284 14:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:56.542 Running I/O for 2 seconds... 00:35:56.542 [2024-07-10 14:38:05.854737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.542 [2024-07-10 14:38:05.854834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.542 [2024-07-10 14:38:05.854866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.542 [2024-07-10 14:38:05.872153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.542 [2024-07-10 14:38:05.872203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.542 [2024-07-10 14:38:05.872233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.542 [2024-07-10 14:38:05.890579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.542 [2024-07-10 14:38:05.890624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.542 [2024-07-10 14:38:05.890651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.542 [2024-07-10 14:38:05.908978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.542 [2024-07-10 14:38:05.909026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.542 [2024-07-10 14:38:05.909054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.542 [2024-07-10 14:38:05.925930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.542 [2024-07-10 14:38:05.925978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.542 [2024-07-10 14:38:05.926007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.542 [2024-07-10 14:38:05.944226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.542 [2024-07-10 14:38:05.944273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.542 [2024-07-10 14:38:05.944302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.542 [2024-07-10 14:38:05.958611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.542 [2024-07-10 14:38:05.958650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:115 nsid:1 lba:7067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.542 [2024-07-10 14:38:05.958673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.542 [2024-07-10 14:38:05.977544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.542 [2024-07-10 14:38:05.977588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.542 [2024-07-10 14:38:05.977614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.543 [2024-07-10 14:38:05.995403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.543 [2024-07-10 14:38:05.995459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.543 [2024-07-10 14:38:05.995503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.543 [2024-07-10 14:38:06.016134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.543 [2024-07-10 14:38:06.016182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.543 [2024-07-10 14:38:06.016211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.801 [2024-07-10 14:38:06.032880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.801 [2024-07-10 14:38:06.032935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-07-10 14:38:06.032966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.801 [2024-07-10 14:38:06.051948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.801 [2024-07-10 14:38:06.052012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-07-10 14:38:06.052042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.801 [2024-07-10 14:38:06.072500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.801 [2024-07-10 14:38:06.072542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-07-10 14:38:06.072565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.801 [2024-07-10 14:38:06.090388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.801 [2024-07-10 
14:38:06.090444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-07-10 14:38:06.090480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.801 [2024-07-10 14:38:06.107495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.801 [2024-07-10 14:38:06.107538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-07-10 14:38:06.107565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.801 [2024-07-10 14:38:06.125142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.801 [2024-07-10 14:38:06.125190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-07-10 14:38:06.125219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.801 [2024-07-10 14:38:06.141334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.801 [2024-07-10 14:38:06.141381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-07-10 14:38:06.141411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.801 [2024-07-10 14:38:06.159369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.801 [2024-07-10 14:38:06.159416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-07-10 14:38:06.159458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.801 [2024-07-10 14:38:06.176326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.801 [2024-07-10 14:38:06.176373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-07-10 14:38:06.176402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.801 [2024-07-10 14:38:06.194187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.801 [2024-07-10 14:38:06.194234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-07-10 14:38:06.194263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.801 [2024-07-10 14:38:06.212327] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.801 [2024-07-10 14:38:06.212375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-07-10 14:38:06.212404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.801 [2024-07-10 14:38:06.229280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.801 [2024-07-10 14:38:06.229327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-07-10 14:38:06.229356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.801 [2024-07-10 14:38:06.248417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.801 [2024-07-10 14:38:06.248488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-07-10 14:38:06.248526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:56.801 [2024-07-10 14:38:06.263953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:56.801 [2024-07-10 14:38:06.264001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-07-10 14:38:06.264031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.059 [2024-07-10 14:38:06.284080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.059 [2024-07-10 14:38:06.284129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.059 [2024-07-10 14:38:06.284158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.059 [2024-07-10 14:38:06.299340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.059 [2024-07-10 14:38:06.299388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.059 [2024-07-10 14:38:06.299416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.059 [2024-07-10 14:38:06.320128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.059 [2024-07-10 14:38:06.320178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.059 [2024-07-10 14:38:06.320207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.059 [2024-07-10 14:38:06.338380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.059 [2024-07-10 14:38:06.338435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.059 [2024-07-10 14:38:06.338474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.059 [2024-07-10 14:38:06.354314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.059 [2024-07-10 14:38:06.354362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.059 [2024-07-10 14:38:06.354390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.059 [2024-07-10 14:38:06.374662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.059 [2024-07-10 14:38:06.374718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.059 [2024-07-10 14:38:06.374742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.059 [2024-07-10 14:38:06.397492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.059 [2024-07-10 14:38:06.397535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.059 [2024-07-10 14:38:06.397560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.059 [2024-07-10 14:38:06.418546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.059 [2024-07-10 14:38:06.418587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.059 [2024-07-10 14:38:06.418612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.060 [2024-07-10 14:38:06.440436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.060 [2024-07-10 14:38:06.440506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.060 [2024-07-10 14:38:06.440533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.060 [2024-07-10 14:38:06.456127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.060 [2024-07-10 14:38:06.456175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.060 [2024-07-10 14:38:06.456204] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.060 [2024-07-10 14:38:06.475213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.060 [2024-07-10 14:38:06.475261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.060 [2024-07-10 14:38:06.475290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.060 [2024-07-10 14:38:06.492366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.060 [2024-07-10 14:38:06.492414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.060 [2024-07-10 14:38:06.492454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.060 [2024-07-10 14:38:06.509818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.060 [2024-07-10 14:38:06.509865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.060 [2024-07-10 14:38:06.509895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.060 [2024-07-10 14:38:06.527973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.060 [2024-07-10 14:38:06.528020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.060 [2024-07-10 14:38:06.528049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.544650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.544692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.544718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.562565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.562603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.562626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.577785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.577832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12758 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.577861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.597225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.597272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.597301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.617023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.617071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.617099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.635051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.635100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.635129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.652290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.652338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.652375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.672849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.672897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.672926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.689609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.689649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.689672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.708176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.708224] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.708252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.727723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.727786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.727815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.742932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.742979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.743009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.761579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.761619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.761643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.777168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.777215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.777245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.318 [2024-07-10 14:38:06.796308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.318 [2024-07-10 14:38:06.796356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.318 [2024-07-10 14:38:06.796385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.577 [2024-07-10 14:38:06.815538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.577 [2024-07-10 14:38:06.815577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.577 [2024-07-10 14:38:06.815601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.577 [2024-07-10 14:38:06.834317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x6150001f2a00) 00:35:57.577 [2024-07-10 14:38:06.834365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.577 [2024-07-10 14:38:06.834394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.577 [2024-07-10 14:38:06.852401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.577 [2024-07-10 14:38:06.852474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.577 [2024-07-10 14:38:06.852502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.577 [2024-07-10 14:38:06.866924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.577 [2024-07-10 14:38:06.866971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.577 [2024-07-10 14:38:06.866999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.577 [2024-07-10 14:38:06.885415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.577 [2024-07-10 14:38:06.885490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.577 [2024-07-10 14:38:06.885517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.577 [2024-07-10 14:38:06.904166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.577 [2024-07-10 14:38:06.904215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.577 [2024-07-10 14:38:06.904244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.577 [2024-07-10 14:38:06.923057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.577 [2024-07-10 14:38:06.923106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.577 [2024-07-10 14:38:06.923135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.577 [2024-07-10 14:38:06.944561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.577 [2024-07-10 14:38:06.944607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.577 [2024-07-10 14:38:06.944634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.577 [2024-07-10 
14:38:06.959584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.577 [2024-07-10 14:38:06.959628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.577 [2024-07-10 14:38:06.959664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.577 [2024-07-10 14:38:06.977860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.577 [2024-07-10 14:38:06.977908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.577 [2024-07-10 14:38:06.977937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.577 [2024-07-10 14:38:06.996283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.577 [2024-07-10 14:38:06.996331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.577 [2024-07-10 14:38:06.996360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.577 [2024-07-10 14:38:07.013130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.577 [2024-07-10 14:38:07.013180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.577 [2024-07-10 14:38:07.013209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.577 [2024-07-10 14:38:07.032055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.577 [2024-07-10 14:38:07.032104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.577 [2024-07-10 14:38:07.032133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.577 [2024-07-10 14:38:07.051398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.577 [2024-07-10 14:38:07.051458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.577 [2024-07-10 14:38:07.051489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.836 [2024-07-10 14:38:07.067942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.836 [2024-07-10 14:38:07.067992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.836 [2024-07-10 14:38:07.068021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.836 [2024-07-10 14:38:07.088662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.836 [2024-07-10 14:38:07.088707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.836 [2024-07-10 14:38:07.088733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.836 [2024-07-10 14:38:07.106997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.836 [2024-07-10 14:38:07.107046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.836 [2024-07-10 14:38:07.107075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.836 [2024-07-10 14:38:07.125685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.836 [2024-07-10 14:38:07.125746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.836 [2024-07-10 14:38:07.125775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.836 [2024-07-10 14:38:07.140871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.836 [2024-07-10 14:38:07.140921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.836 [2024-07-10 14:38:07.140960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.836 [2024-07-10 14:38:07.161321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.836 [2024-07-10 14:38:07.161372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.836 [2024-07-10 14:38:07.161401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.836 [2024-07-10 14:38:07.179564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.836 [2024-07-10 14:38:07.179606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.836 [2024-07-10 14:38:07.179631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.836 [2024-07-10 14:38:07.200726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.836 [2024-07-10 14:38:07.200790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.836 [2024-07-10 
14:38:07.200820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.836 [2024-07-10 14:38:07.215264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.836 [2024-07-10 14:38:07.215312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.836 [2024-07-10 14:38:07.215341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.836 [2024-07-10 14:38:07.237213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.836 [2024-07-10 14:38:07.237262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.836 [2024-07-10 14:38:07.237292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.836 [2024-07-10 14:38:07.256546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.836 [2024-07-10 14:38:07.256591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.836 [2024-07-10 14:38:07.256617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.836 [2024-07-10 14:38:07.273126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.836 [2024-07-10 14:38:07.273174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.836 [2024-07-10 14:38:07.273215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.836 [2024-07-10 14:38:07.292877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.836 [2024-07-10 14:38:07.292927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.836 [2024-07-10 14:38:07.292957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.836 [2024-07-10 14:38:07.311360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:57.836 [2024-07-10 14:38:07.311408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.836 [2024-07-10 14:38:07.311450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.094 [2024-07-10 14:38:07.328269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.094 [2024-07-10 14:38:07.328317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:7431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.094 [2024-07-10 14:38:07.328347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.094 [2024-07-10 14:38:07.348403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.094 [2024-07-10 14:38:07.348478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.094 [2024-07-10 14:38:07.348505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.094 [2024-07-10 14:38:07.364234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.094 [2024-07-10 14:38:07.364283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.094 [2024-07-10 14:38:07.364313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.095 [2024-07-10 14:38:07.386204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.095 [2024-07-10 14:38:07.386255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.095 [2024-07-10 14:38:07.386285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.095 [2024-07-10 14:38:07.405886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.095 [2024-07-10 14:38:07.405936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.095 [2024-07-10 14:38:07.405965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.095 [2024-07-10 14:38:07.421923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.095 [2024-07-10 14:38:07.421973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.095 [2024-07-10 14:38:07.422002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.095 [2024-07-10 14:38:07.441060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.095 [2024-07-10 14:38:07.441109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.095 [2024-07-10 14:38:07.441139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.095 [2024-07-10 14:38:07.460944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.095 [2024-07-10 
14:38:07.461004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.095 [2024-07-10 14:38:07.461033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.095 [2024-07-10 14:38:07.477407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.095 [2024-07-10 14:38:07.477467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.095 [2024-07-10 14:38:07.477502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.095 [2024-07-10 14:38:07.497090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.095 [2024-07-10 14:38:07.497139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.095 [2024-07-10 14:38:07.497168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.095 [2024-07-10 14:38:07.514904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.095 [2024-07-10 14:38:07.514952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.095 [2024-07-10 14:38:07.514980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.095 [2024-07-10 14:38:07.532471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.095 [2024-07-10 14:38:07.532530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.095 [2024-07-10 14:38:07.532557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.095 [2024-07-10 14:38:07.550037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.095 [2024-07-10 14:38:07.550085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.095 [2024-07-10 14:38:07.550114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.095 [2024-07-10 14:38:07.569190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.095 [2024-07-10 14:38:07.569239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.095 [2024-07-10 14:38:07.569268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.353 [2024-07-10 14:38:07.589258] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.353 [2024-07-10 14:38:07.589306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.353 [2024-07-10 14:38:07.589346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.353 [2024-07-10 14:38:07.605631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.353 [2024-07-10 14:38:07.605673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.353 [2024-07-10 14:38:07.605733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.353 [2024-07-10 14:38:07.626730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.353 [2024-07-10 14:38:07.626795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.353 [2024-07-10 14:38:07.626825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.353 [2024-07-10 14:38:07.643529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.354 [2024-07-10 14:38:07.643573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.354 [2024-07-10 14:38:07.643599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.354 [2024-07-10 14:38:07.661144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.354 [2024-07-10 14:38:07.661193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.354 [2024-07-10 14:38:07.661221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.354 [2024-07-10 14:38:07.679795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.354 [2024-07-10 14:38:07.679843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.354 [2024-07-10 14:38:07.679872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.354 [2024-07-10 14:38:07.699105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.354 [2024-07-10 14:38:07.699155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.354 [2024-07-10 14:38:07.699184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.354 [2024-07-10 14:38:07.715897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.354 [2024-07-10 14:38:07.715946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.354 [2024-07-10 14:38:07.715975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.354 [2024-07-10 14:38:07.734891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.354 [2024-07-10 14:38:07.734939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.354 [2024-07-10 14:38:07.734967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.354 [2024-07-10 14:38:07.752661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.354 [2024-07-10 14:38:07.752706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.354 [2024-07-10 14:38:07.752748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.354 [2024-07-10 14:38:07.770726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.354 [2024-07-10 14:38:07.770789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.354 [2024-07-10 14:38:07.770818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.354 [2024-07-10 14:38:07.787220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.354 [2024-07-10 14:38:07.787268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.354 [2024-07-10 14:38:07.787298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.354 [2024-07-10 14:38:07.805526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.354 [2024-07-10 14:38:07.805570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.354 [2024-07-10 14:38:07.805597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.354 [2024-07-10 14:38:07.825937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:58.354 [2024-07-10 14:38:07.825985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.354 [2024-07-10 14:38:07.826014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:58.612 00:35:58.612 Latency(us) 00:35:58.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:58.612 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:58.612 nvme0n1 : 2.01 13965.46 54.55 0.00 0.00 9153.91 4563.25 24272.59 00:35:58.612 =================================================================================================================== 00:35:58.612 Total : 13965.46 54.55 0.00 0.00 9153.91 4563.25 24272.59 00:35:58.612 0 00:35:58.612 14:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:58.612 14:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:58.612 14:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:58.612 | .driver_specific 00:35:58.612 | .nvme_error 00:35:58.612 | .status_code 00:35:58.612 | .command_transient_transport_error' 00:35:58.612 14:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:58.869 14:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 109 > 0 )) 00:35:58.869 14:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1545629 00:35:58.869 14:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1545629 ']' 00:35:58.869 14:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1545629 00:35:58.869 14:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:35:58.869 14:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:58.869 14:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1545629 00:35:58.869 14:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:58.869 14:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:58.869 14:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1545629' 00:35:58.869 killing process with pid 1545629 00:35:58.869 14:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1545629 00:35:58.869 Received shutdown signal, test time was about 2.000000 seconds 00:35:58.869 00:35:58.869 Latency(us) 00:35:58.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:58.869 =================================================================================================================== 00:35:58.869 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:58.869 14:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1545629 00:35:59.803 14:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:59.803 14:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:59.803 14:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:59.803 14:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # bs=131072 00:35:59.803 14:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:59.803 14:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1546171 00:35:59.803 14:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:59.803 14:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1546171 /var/tmp/bperf.sock 00:35:59.803 14:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1546171 ']' 00:35:59.803 14:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:59.803 14:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:59.803 14:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:59.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:59.803 14:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:59.803 14:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:59.803 [2024-07-10 14:38:09.279124] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:35:59.803 [2024-07-10 14:38:09.279292] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546171 ] 00:35:59.803 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:59.803 Zero copy mechanism will not be used. 
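For reference, the bdevperf invocation traced above boils down to roughly the following standalone sketch. This is a simplified reconstruction, not the exact host/digest.sh helper: the backgrounding and the bperfpid capture via $! are assumptions, while the binary path (shortened here) and every flag are taken from the trace.

    # 131072-byte random reads, queue depth 16, 2-second runs, core mask 0x2,
    # dedicated RPC socket; -z keeps bdevperf idle until perform_tests is sent.
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!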
00:36:00.061 EAL: No free 2048 kB hugepages reported on node 1 00:36:00.061 [2024-07-10 14:38:09.407713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:00.318 [2024-07-10 14:38:09.662927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:00.883 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:00.883 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:00.883 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:00.883 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:01.140 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:01.140 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.140 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:01.140 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.140 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:01.140 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:01.705 nvme0n1 00:36:01.705 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:01.705 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.705 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:01.705 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.705 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:01.705 14:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:01.705 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:01.705 Zero copy mechanism will not be used. 00:36:01.705 Running I/O for 2 seconds... 
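The RPC sequence traced above amounts to roughly the sketch below. Paths are shortened and all sub-commands and flags are copied from the trace; the socket targets are partly an assumption, since the bdev_nvme_* calls visibly go to the bdevperf instance via -s /var/tmp/bperf.sock while rpc_cmd (the accel_error_inject_error calls) resolves to an application's default RPC socket that the trace does not show.

    bperf_rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
    # Keep per-controller NVMe error counters (read back later via bdev_get_iostat)
    # and retry failed I/O without limit.
    $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any stale crc32c error injection before attaching the controller.
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # Attach the TCP controller with data digest (--ddgst) enabled on the connection.
    $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt 32 crc32c operations so reads hit data digest errors and complete
    # as COMMAND TRANSIENT TRANSPORT ERROR, as seen in the output that follows.
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    # Start the queued bdevperf job.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests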
00:36:01.706 [2024-07-10 14:38:11.063701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.706 [2024-07-10 14:38:11.063793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.706 [2024-07-10 14:38:11.063823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:01.706 [2024-07-10 14:38:11.075866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.706 [2024-07-10 14:38:11.075911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.706 [2024-07-10 14:38:11.075937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:01.706 [2024-07-10 14:38:11.087900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.706 [2024-07-10 14:38:11.087950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.706 [2024-07-10 14:38:11.087980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:01.706 [2024-07-10 14:38:11.100153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.706 [2024-07-10 14:38:11.100202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.706 [2024-07-10 14:38:11.100232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.706 [2024-07-10 14:38:11.112403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.706 [2024-07-10 14:38:11.112476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.706 [2024-07-10 14:38:11.112518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:01.706 [2024-07-10 14:38:11.124408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.706 [2024-07-10 14:38:11.124465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.706 [2024-07-10 14:38:11.124510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:01.706 [2024-07-10 14:38:11.136444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.706 [2024-07-10 14:38:11.136502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.706 [2024-07-10 14:38:11.136528] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:01.706 [2024-07-10 14:38:11.148608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.706 [2024-07-10 14:38:11.148650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.706 [2024-07-10 14:38:11.148676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.706 [2024-07-10 14:38:11.160745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.706 [2024-07-10 14:38:11.160809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.706 [2024-07-10 14:38:11.160838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:01.706 [2024-07-10 14:38:11.172810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.706 [2024-07-10 14:38:11.172858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.706 [2024-07-10 14:38:11.172888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:01.706 [2024-07-10 14:38:11.185044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.706 [2024-07-10 14:38:11.185092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.706 [2024-07-10 14:38:11.185122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.197473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.197540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.197570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.209642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.209683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.209708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.222031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.222079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:01.966 [2024-07-10 14:38:11.222108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.234072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.234120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.234150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.246442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.246503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.246529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.258683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.258741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.258772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.271045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.271092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.271121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.282985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.283033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.283062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.295030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.295078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.295108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.307021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.307069] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.307098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.319110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.319157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.319195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.331569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.331620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.331649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.343645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.343686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.343712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.355775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.355831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.355861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.368105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.368154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.368183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.380129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.380177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.380206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.392407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 
00:36:01.966 [2024-07-10 14:38:11.392464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.392509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.404522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.404567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.404594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.416827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.416875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.416905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.429038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.429086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.429116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:01.966 [2024-07-10 14:38:11.441305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.966 [2024-07-10 14:38:11.441353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.966 [2024-07-10 14:38:11.441383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.453779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.453829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.453859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.466036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.466083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.466112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.478321] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.478369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.478399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.490442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.490515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.490543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.503058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.503105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.503135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.515596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.515638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.515664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.527641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.527681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.527714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.539641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.539682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.539706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.552173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.552220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.552250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.564449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.564511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.564536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.576834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.576882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.576912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.589514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.589557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.589583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.602170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.602219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.602249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.614319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.614370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.614399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.626341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.626388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.626417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.638400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.638473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.638500] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.650491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.650532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.650556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.662787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.662834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.662864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.675007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.675053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.675082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.687010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.687058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.687087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.225 [2024-07-10 14:38:11.699396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.225 [2024-07-10 14:38:11.699455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.225 [2024-07-10 14:38:11.699485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.711632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 14:38:11.711676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.711701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.723744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 14:38:11.723793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.723821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.735671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 14:38:11.735714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.735766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.748113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 14:38:11.748169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.748199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.760593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 14:38:11.760634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.760659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.774261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 14:38:11.774324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.774354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.787046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 14:38:11.787094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.787123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.799638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 14:38:11.799680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.799705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.812391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 
14:38:11.812466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.812500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.824649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 14:38:11.824691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.824731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.837031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 14:38:11.837078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.837107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.849764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 14:38:11.849820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.849850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.862018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 14:38:11.862075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.862105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.874176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 14:38:11.874233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.874262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.886365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:02.484 [2024-07-10 14:38:11.886422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.484 [2024-07-10 14:38:11.886461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.484 [2024-07-10 14:38:11.898474] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:02.484 [2024-07-10 14:38:11.898540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:02.484 [2024-07-10 14:38:11.898565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... dozens more repetitions of the same three-line pattern follow, roughly every 12 ms from 14:38:11.910 through 14:38:13.055: a data digest error from nvme_tcp_accel_seq_recv_compute_crc32_done on tqpair 0x6150001f2a00, the offending READ on qid:1 cid:15 (len:32, LBA varying), and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:36:03.776
00:36:03.776 Latency(us)
00:36:03.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:03.776 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:36:03.776 nvme0n1 : 2.00 2540.79 317.60 0.00 0.00 6288.05 5752.60 13592.65
00:36:03.776 ===================================================================================================================
00:36:03.776 Total : 2540.79 317.60 0.00 0.00 6288.05 5752.60 13592.65
00:36:03.776 0
00:36:03.776 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:03.776 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:03.776 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:03.776 | .driver_specific
00:36:03.776 | .nvme_error
00:36:03.776 | .status_code
00:36:03.776 | .command_transient_transport_error'
00:36:03.776 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:04.046 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 ))
00:36:04.046 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1546171
00:36:04.046 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1546171 ']'
00:36:04.046 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1546171
00:36:04.046 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:04.046 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:04.046 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1546171
00:36:04.046 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:04.046 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:04.046 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1546171'
00:36:04.046 killing process with pid 1546171
00:36:04.046 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1546171
00:36:04.046 Received shutdown signal, test time was about 2.000000 seconds
00:36:04.046
00:36:04.046 Latency(us)
00:36:04.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:04.046 ===================================================================================================================
00:36:04.046 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:04.046 14:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1546171
00:36:04.988 14:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
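The get_transient_errcount trace above reduces to one RPC call plus a jq filter over its JSON reply. A minimal standalone sketch of the same check, assuming bdevperf is still serving RPCs on /var/tmp/bperf.sock and that bdev_nvme_set_options --nvme-error-stat was applied earlier in the run:

  #!/usr/bin/env bash
  # Sketch of the check behind host/digest.sh@71 above; paths are the ones this job uses.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock

  # bdev_get_iostat exposes per-status-code NVMe error counters under driver_specific
  # when --nvme-error-stat is enabled; pull out the transient transport error count.
  errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

  # The digest-error test only passes if the corrupted CRC32C digests actually surfaced
  # as transient transport errors (164 of them in the randread run above).
  (( errcount > 0 )) && echo "nvme0n1 saw $errcount transient transport errors"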
00:36:04.988 14:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:04.988 14:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:04.988 14:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:36:04.988 14:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:36:04.988 14:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1546838
00:36:04.988 14:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:36:04.988 14:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1546838 /var/tmp/bperf.sock
00:36:04.988 14:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1546838 ']'
00:36:04.988 14:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:04.988 14:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:04.988 14:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:04.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:04.988 14:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:04.988 14:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
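What host/digest.sh@57-@60 traces above amounts to is backgrounding bdevperf against a private RPC socket and waiting until that socket answers. A rough sketch under the same workspace paths, using rpc_get_methods purely as a liveness probe (the real waitforlisten helper in autotest_common.sh does more bookkeeping):

  #!/usr/bin/env bash
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  # -z keeps bdevperf idle until a perform_tests RPC arrives; -m 2 pins it to core 1,
  # matching the "Core Mask 0x2" / "Reactor started on core 1" lines in this log.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!

  # Poll the UNIX-domain RPC socket until the app responds (bounded, like max_retries=100).
  for ((i = 0; i < 100; i++)); do
      "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done
  echo "bdevperf (pid $bperfpid) is listening on $SOCK"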
00:36:05.245 [2024-07-10 14:38:14.516066] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization...
00:36:05.245 [2024-07-10 14:38:14.516212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546838 ]
00:36:05.245 EAL: No free 2048 kB hugepages reported on node 1
00:36:05.245 [2024-07-10 14:38:14.647433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:05.503 [2024-07-10 14:38:14.902779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:06.069 14:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:06.069 14:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:06.069 14:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:06.069 14:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:06.326 14:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:06.326 14:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:06.326 14:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:06.326 14:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:06.326 14:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:06.326 14:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:06.891 nvme0n1
00:36:06.891 14:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:36:06.891 14:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:06.891 14:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:06.891 14:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:06.891 14:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:06.891 14:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:06.891 Running I/O for 2 seconds...
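Read together, the RPC trace above is the whole write-path digest-error setup: per-status-code error counters on, unlimited bdev retries, CRC32C corruption disabled while the TCP controller is attached with data digest enabled, then re-enabled for 256 operations before the workload is started. A condensed sketch of that sequence, assuming the same rpc.py, socket, and target address shown in the trace:

  #!/usr/bin/env bash
  # Mirror of the bperf_rpc helper used above: every call goes to the bdevperf RPC socket.
  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  # Keep per-status-code NVMe error counters and retry transient errors instead of failing I/O.
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Leave CRC32C generation intact while the controller is attached, so the connect itself succeeds.
  rpc accel_error_inject_error -o crc32c -t disable

  # Attach the NVMe/TCP controller with data digest (--ddgst) enabled on the host side.
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt the next 256 CRC32C computations so received data digests stop matching.
  rpc accel_error_inject_error -o crc32c -t corrupt -i 256

  # Kick off the 2-second randwrite workload in the idle (-z) bdevperf instance.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests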
00:36:06.891 [2024-07-10 14:38:16.284655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:36:06.891 [2024-07-10 14:38:16.286001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.891 [2024-07-10 14:38:16.286064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:06.891 [2024-07-10 14:38:16.300042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:36:06.891 [2024-07-10 14:38:16.301358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.891 [2024-07-10 14:38:16.301405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:06.891 [2024-07-10 14:38:16.318258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb480 00:36:06.891 [2024-07-10 14:38:16.319860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.891 [2024-07-10 14:38:16.319905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:06.891 [2024-07-10 14:38:16.335235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:36:06.891 [2024-07-10 14:38:16.336960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.891 [2024-07-10 14:38:16.337005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:06.891 [2024-07-10 14:38:16.350791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:36:06.891 [2024-07-10 14:38:16.352527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.891 [2024-07-10 14:38:16.352567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:06.891 [2024-07-10 14:38:16.365853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7c50 00:36:06.891 [2024-07-10 14:38:16.366878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.891 [2024-07-10 14:38:16.366922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.383035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1ca0 00:36:07.149 [2024-07-10 14:38:16.383932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.383976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.399498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ddc00 00:36:07.149 [2024-07-10 14:38:16.400777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.400820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.417618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa3a0 00:36:07.149 [2024-07-10 14:38:16.419790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.419834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.432579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0bc0 00:36:07.149 [2024-07-10 14:38:16.434047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.434090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.448405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195efae0 00:36:07.149 [2024-07-10 14:38:16.449944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.449987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.465014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6020 00:36:07.149 [2024-07-10 14:38:16.466821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.466864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.480339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:36:07.149 [2024-07-10 14:38:16.482023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.482065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.495056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1ca0 00:36:07.149 [2024-07-10 14:38:16.496099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.496142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.511085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:36:07.149 [2024-07-10 14:38:16.512016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.512059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.529685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8 00:36:07.149 [2024-07-10 14:38:16.531655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.531706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.544628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:36:07.149 [2024-07-10 14:38:16.546123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.546171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.561588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:36:07.149 [2024-07-10 14:38:16.562888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.562929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.580442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:36:07.149 [2024-07-10 14:38:16.583135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.583180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.592287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:36:07.149 [2024-07-10 14:38:16.593371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.593413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.608103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f57b0 00:36:07.149 [2024-07-10 14:38:16.609142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18435 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:07.149 [2024-07-10 14:38:16.609193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:07.149 [2024-07-10 14:38:16.626402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:36:07.149 [2024-07-10 14:38:16.627815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.149 [2024-07-10 14:38:16.627858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:07.407 [2024-07-10 14:38:16.643714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ddc00 00:36:07.407 [2024-07-10 14:38:16.644914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.407 [2024-07-10 14:38:16.644959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:07.407 [2024-07-10 14:38:16.662705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eea00 00:36:07.407 [2024-07-10 14:38:16.665103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.407 [2024-07-10 14:38:16.665147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:07.407 [2024-07-10 14:38:16.678050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f81e0 00:36:07.407 [2024-07-10 14:38:16.679810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.407 [2024-07-10 14:38:16.679854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:07.407 [2024-07-10 14:38:16.693074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebfd0 00:36:07.407 [2024-07-10 14:38:16.695722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.407 [2024-07-10 14:38:16.695781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:07.407 [2024-07-10 14:38:16.708380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd640 00:36:07.407 [2024-07-10 14:38:16.709452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.407 [2024-07-10 14:38:16.709506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:07.407 [2024-07-10 14:38:16.725223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee190 00:36:07.407 [2024-07-10 14:38:16.726534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:70 nsid:1 lba:23209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.408 [2024-07-10 14:38:16.726573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:07.408 [2024-07-10 14:38:16.740845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc128 00:36:07.408 [2024-07-10 14:38:16.742103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.408 [2024-07-10 14:38:16.742145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:07.408 [2024-07-10 14:38:16.759145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fcdd0 00:36:07.408 [2024-07-10 14:38:16.760754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.408 [2024-07-10 14:38:16.760813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.408 [2024-07-10 14:38:16.776256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3060 00:36:07.408 [2024-07-10 14:38:16.777991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.408 [2024-07-10 14:38:16.778035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.408 [2024-07-10 14:38:16.791874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:36:07.408 [2024-07-10 14:38:16.793595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.408 [2024-07-10 14:38:16.793633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:07.408 [2024-07-10 14:38:16.807699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5378 00:36:07.408 [2024-07-10 14:38:16.808790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.408 [2024-07-10 14:38:16.808834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:07.408 [2024-07-10 14:38:16.824318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea680 00:36:07.408 [2024-07-10 14:38:16.825148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.408 [2024-07-10 14:38:16.825193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:07.408 [2024-07-10 14:38:16.843040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd640 00:36:07.408 [2024-07-10 14:38:16.845185] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.408 [2024-07-10 14:38:16.845228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:07.408 [2024-07-10 14:38:16.860141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:36:07.408 [2024-07-10 14:38:16.862536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.408 [2024-07-10 14:38:16.862575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:07.408 [2024-07-10 14:38:16.875359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8618 00:36:07.408 [2024-07-10 14:38:16.877073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.408 [2024-07-10 14:38:16.877116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:07.666 [2024-07-10 14:38:16.894358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1ca0 00:36:07.666 [2024-07-10 14:38:16.896953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.666 [2024-07-10 14:38:16.897007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:07.666 [2024-07-10 14:38:16.905991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb760 00:36:07.666 [2024-07-10 14:38:16.907091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.666 [2024-07-10 14:38:16.907134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:07.666 [2024-07-10 14:38:16.921453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:36:07.666 [2024-07-10 14:38:16.922354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.666 [2024-07-10 14:38:16.922393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:07.666 [2024-07-10 14:38:16.939511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2510 00:36:07.666 [2024-07-10 14:38:16.940785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.666 [2024-07-10 14:38:16.940828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:07.666 [2024-07-10 14:38:16.956295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with 
pdu=0x2000195f6cc8 00:36:07.666 [2024-07-10 14:38:16.957782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.666 [2024-07-10 14:38:16.957825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:07.666 [2024-07-10 14:38:16.971629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3060 00:36:07.666 [2024-07-10 14:38:16.973070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.666 [2024-07-10 14:38:16.973113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:07.666 [2024-07-10 14:38:16.989836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1430 00:36:07.666 [2024-07-10 14:38:16.991641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.666 [2024-07-10 14:38:16.991680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:07.666 [2024-07-10 14:38:17.006149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9b30 00:36:07.666 [2024-07-10 14:38:17.007906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.667 [2024-07-10 14:38:17.007949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:07.667 [2024-07-10 14:38:17.020972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:36:07.667 [2024-07-10 14:38:17.023456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.667 [2024-07-10 14:38:17.023513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:07.667 [2024-07-10 14:38:17.036142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:36:07.667 [2024-07-10 14:38:17.037207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.667 [2024-07-10 14:38:17.037251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:07.667 [2024-07-10 14:38:17.053007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6b70 00:36:07.667 [2024-07-10 14:38:17.054287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.667 [2024-07-10 14:38:17.054329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:07.667 [2024-07-10 14:38:17.069420] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:36:07.667 [2024-07-10 14:38:17.070706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.667 [2024-07-10 14:38:17.070764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:07.667 [2024-07-10 14:38:17.085807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7970 00:36:07.667 [2024-07-10 14:38:17.087085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.667 [2024-07-10 14:38:17.087128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:07.667 [2024-07-10 14:38:17.101911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:36:07.667 [2024-07-10 14:38:17.103200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.667 [2024-07-10 14:38:17.103244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:07.667 [2024-07-10 14:38:17.118442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:36:07.667 [2024-07-10 14:38:17.119794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.667 [2024-07-10 14:38:17.119837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:07.667 [2024-07-10 14:38:17.134634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5220 00:36:07.667 [2024-07-10 14:38:17.135876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.667 [2024-07-10 14:38:17.135919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:07.925 [2024-07-10 14:38:17.151843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e84c0 00:36:07.925 [2024-07-10 14:38:17.153103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.925 [2024-07-10 14:38:17.153147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:07.925 [2024-07-10 14:38:17.168859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0bc0 00:36:07.925 [2024-07-10 14:38:17.170315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.170358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:07.926 
[2024-07-10 14:38:17.184255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:36:07.926 [2024-07-10 14:38:17.185802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.185844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:07.926 [2024-07-10 14:38:17.202322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2948 00:36:07.926 [2024-07-10 14:38:17.204050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.204094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:07.926 [2024-07-10 14:38:17.219180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:36:07.926 [2024-07-10 14:38:17.221104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.221146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:07.926 [2024-07-10 14:38:17.234604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:36:07.926 [2024-07-10 14:38:17.236460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.236515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:07.926 [2024-07-10 14:38:17.249845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fcdd0 00:36:07.926 [2024-07-10 14:38:17.251116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.251159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:07.926 [2024-07-10 14:38:17.266374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6b70 00:36:07.926 [2024-07-10 14:38:17.267439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.267497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.926 [2024-07-10 14:38:17.283031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e49b0 00:36:07.926 [2024-07-10 14:38:17.284521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.284559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.926 [2024-07-10 14:38:17.299210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:36:07.926 [2024-07-10 14:38:17.300699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.300754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.926 [2024-07-10 14:38:17.315522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8618 00:36:07.926 [2024-07-10 14:38:17.316986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.317039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.926 [2024-07-10 14:38:17.331576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:36:07.926 [2024-07-10 14:38:17.333058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.333102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.926 [2024-07-10 14:38:17.347879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195efae0 00:36:07.926 [2024-07-10 14:38:17.349386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.349437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.926 [2024-07-10 14:38:17.364138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:36:07.926 [2024-07-10 14:38:17.365716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.365773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.926 [2024-07-10 14:38:17.380572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:36:07.926 [2024-07-10 14:38:17.382029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.382072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.926 [2024-07-10 14:38:17.396688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc128 00:36:07.926 [2024-07-10 14:38:17.398185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.926 [2024-07-10 14:38:17.398227] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.184 [2024-07-10 14:38:17.414035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:36:08.184 [2024-07-10 14:38:17.415493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.184 [2024-07-10 14:38:17.415532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.184 [2024-07-10 14:38:17.430320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8 00:36:08.184 [2024-07-10 14:38:17.431816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.184 [2024-07-10 14:38:17.431859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.184 [2024-07-10 14:38:17.446677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dece0 00:36:08.184 [2024-07-10 14:38:17.448165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.184 [2024-07-10 14:38:17.448212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.184 [2024-07-10 14:38:17.465116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7c50 00:36:08.184 [2024-07-10 14:38:17.467526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.184 [2024-07-10 14:38:17.467566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.184 [2024-07-10 14:38:17.480386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:36:08.184 [2024-07-10 14:38:17.482075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.184 [2024-07-10 14:38:17.482120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:08.184 [2024-07-10 14:38:17.495345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:36:08.184 [2024-07-10 14:38:17.497935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.184 [2024-07-10 14:38:17.497979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:08.184 [2024-07-10 14:38:17.510859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:36:08.184 [2024-07-10 14:38:17.511938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.184 [2024-07-10 
14:38:17.511986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:08.184 [2024-07-10 14:38:17.527669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2948 00:36:08.184 [2024-07-10 14:38:17.528949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.184 [2024-07-10 14:38:17.528992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:08.184 [2024-07-10 14:38:17.543132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:36:08.184 [2024-07-10 14:38:17.544371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.184 [2024-07-10 14:38:17.544415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:08.184 [2024-07-10 14:38:17.561261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9b30 00:36:08.184 [2024-07-10 14:38:17.562870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.184 [2024-07-10 14:38:17.562915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:08.184 [2024-07-10 14:38:17.576662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1f80 00:36:08.184 [2024-07-10 14:38:17.578137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.184 [2024-07-10 14:38:17.578178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.184 [2024-07-10 14:38:17.595059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e84c0 00:36:08.184 [2024-07-10 14:38:17.596816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.185 [2024-07-10 14:38:17.596869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:08.185 [2024-07-10 14:38:17.612256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8a50 00:36:08.185 [2024-07-10 14:38:17.614221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.185 [2024-07-10 14:38:17.614265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:08.185 [2024-07-10 14:38:17.627720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df550 00:36:08.185 [2024-07-10 14:38:17.629573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7949 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.185 [2024-07-10 14:38:17.629612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.185 [2024-07-10 14:38:17.642987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5378 00:36:08.185 [2024-07-10 14:38:17.644235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.185 [2024-07-10 14:38:17.644279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:08.185 [2024-07-10 14:38:17.659567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 00:36:08.185 [2024-07-10 14:38:17.660664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.185 [2024-07-10 14:38:17.660707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.677625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd208 00:36:08.443 [2024-07-10 14:38:17.679013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.679057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.694293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:36:08.443 [2024-07-10 14:38:17.696034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.696076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.710638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8 00:36:08.443 [2024-07-10 14:38:17.712320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.712363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.726989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8a50 00:36:08.443 [2024-07-10 14:38:17.728795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.728838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.743240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:36:08.443 [2024-07-10 14:38:17.744954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.744997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.759474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0bc0 00:36:08.443 [2024-07-10 14:38:17.761145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.761188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.775848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:36:08.443 [2024-07-10 14:38:17.777585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.777624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.792165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:36:08.443 [2024-07-10 14:38:17.793903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.793946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.808360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:36:08.443 [2024-07-10 14:38:17.810040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.810083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.824757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:36:08.443 [2024-07-10 14:38:17.826438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.826480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.843088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:36:08.443 [2024-07-10 14:38:17.845671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.845710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.854668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8 
00:36:08.443 [2024-07-10 14:38:17.855685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.855753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.870978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:36:08.443 [2024-07-10 14:38:17.871968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.872010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.888803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3e60 00:36:08.443 [2024-07-10 14:38:17.891443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.891498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:08.443 [2024-07-10 14:38:17.903788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8 00:36:08.443 [2024-07-10 14:38:17.905030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.443 [2024-07-10 14:38:17.905073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:08.444 [2024-07-10 14:38:17.920366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:36:08.444 [2024-07-10 14:38:17.922065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.444 [2024-07-10 14:38:17.922112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:08.700 [2024-07-10 14:38:17.936286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:36:08.700 [2024-07-10 14:38:17.937823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.700 [2024-07-10 14:38:17.937866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.700 [2024-07-10 14:38:17.954206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:36:08.700 [2024-07-10 14:38:17.955880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.700 [2024-07-10 14:38:17.955922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:08.700 [2024-07-10 14:38:17.970515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:36:08.700 [2024-07-10 14:38:17.972345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.700 [2024-07-10 14:38:17.972388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:08.700 [2024-07-10 14:38:17.985328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:36:08.700 [2024-07-10 14:38:17.987184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.700 [2024-07-10 14:38:17.987226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.700 [2024-07-10 14:38:18.000005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5220 00:36:08.700 [2024-07-10 14:38:18.001246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.700 [2024-07-10 14:38:18.001289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:08.700 [2024-07-10 14:38:18.016051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:36:08.700 [2024-07-10 14:38:18.017210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.700 [2024-07-10 14:38:18.017254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.700 [2024-07-10 14:38:18.034372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6890 00:36:08.700 [2024-07-10 14:38:18.036676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.700 [2024-07-10 14:38:18.036714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.700 [2024-07-10 14:38:18.049060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8d30 00:36:08.700 [2024-07-10 14:38:18.050786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.700 [2024-07-10 14:38:18.050828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:08.700 [2024-07-10 14:38:18.063509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:36:08.700 [2024-07-10 14:38:18.065979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.700 [2024-07-10 14:38:18.066022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:08.700 [2024-07-10 
14:38:18.078117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:36:08.700 [2024-07-10 14:38:18.079105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.700 [2024-07-10 14:38:18.079148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:08.700 [2024-07-10 14:38:18.094499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f92c0 00:36:08.700 [2024-07-10 14:38:18.095543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.700 [2024-07-10 14:38:18.095581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:08.700 [2024-07-10 14:38:18.109142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7970 00:36:08.700 [2024-07-10 14:38:18.110372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.700 [2024-07-10 14:38:18.110417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:08.700 [2024-07-10 14:38:18.126833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eea00 00:36:08.701 [2024-07-10 14:38:18.128281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.701 [2024-07-10 14:38:18.128324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:08.701 [2024-07-10 14:38:18.143238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8 00:36:08.701 [2024-07-10 14:38:18.144900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.701 [2024-07-10 14:38:18.144943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:08.701 [2024-07-10 14:38:18.159226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:36:08.701 [2024-07-10 14:38:18.160902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.701 [2024-07-10 14:38:18.160944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:08.701 [2024-07-10 14:38:18.175022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7c50 00:36:08.701 [2024-07-10 14:38:18.176703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.701 [2024-07-10 14:38:18.176760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 
cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:08.958 [2024-07-10 14:38:18.190641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9e10 00:36:08.958 [2024-07-10 14:38:18.192287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.958 [2024-07-10 14:38:18.192330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:08.958 [2024-07-10 14:38:18.208294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:36:08.958 [2024-07-10 14:38:18.210225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.958 [2024-07-10 14:38:18.210268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:08.958 [2024-07-10 14:38:18.223157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:36:08.958 [2024-07-10 14:38:18.225006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.958 [2024-07-10 14:38:18.225048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.958 [2024-07-10 14:38:18.237936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dece0 00:36:08.958 [2024-07-10 14:38:18.239170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.958 [2024-07-10 14:38:18.239213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:08.958 [2024-07-10 14:38:18.253980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:36:08.958 [2024-07-10 14:38:18.255099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.958 [2024-07-10 14:38:18.255143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.958 [2024-07-10 14:38:18.270349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:36:08.958 [2024-07-10 14:38:18.271796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.958 [2024-07-10 14:38:18.271839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.958 00:36:08.958 Latency(us) 00:36:08.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.958 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:08.958 nvme0n1 : 2.01 15658.83 61.17 0.00 0.00 8157.13 3446.71 20291.89 00:36:08.958 =================================================================================================================== 
00:36:08.958 Total : 15658.83 61.17 0.00 0.00 8157.13 3446.71 20291.89 00:36:08.958 0 00:36:08.958 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:08.958 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:08.958 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:08.958 | .driver_specific 00:36:08.958 | .nvme_error 00:36:08.958 | .status_code 00:36:08.958 | .command_transient_transport_error' 00:36:08.958 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:09.216 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 123 > 0 )) 00:36:09.216 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1546838 00:36:09.216 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1546838 ']' 00:36:09.216 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1546838 00:36:09.216 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:09.216 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:09.216 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1546838 00:36:09.216 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:09.216 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:09.216 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1546838' 00:36:09.216 killing process with pid 1546838 00:36:09.216 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1546838 00:36:09.217 Received shutdown signal, test time was about 2.000000 seconds 00:36:09.217 00:36:09.217 Latency(us) 00:36:09.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:09.217 =================================================================================================================== 00:36:09.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:09.217 14:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1546838 00:36:10.149 14:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:10.149 14:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:10.149 14:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:10.149 14:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:10.149 14:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:10.149 14:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1547384 00:36:10.149 14:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:36:10.149 14:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 
1547384 /var/tmp/bperf.sock 00:36:10.149 14:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1547384 ']' 00:36:10.149 14:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:10.149 14:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:10.149 14:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:10.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:10.149 14:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:10.149 14:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:10.407 [2024-07-10 14:38:19.698874] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:36:10.407 [2024-07-10 14:38:19.699027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1547384 ] 00:36:10.407 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:10.407 Zero copy mechanism will not be used. 00:36:10.407 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.407 [2024-07-10 14:38:19.827395] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:10.665 [2024-07-10 14:38:20.079539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:11.230 14:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:11.231 14:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:11.231 14:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:11.231 14:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:11.487 14:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:11.487 14:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.487 14:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:11.487 14:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.487 14:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:11.487 14:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:12.053 nvme0n1 00:36:12.053 14:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:12.053 14:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
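The randwrite error pass that starts here follows the same pattern as the previous one: launch bdevperf idle (-z) against /var/tmp/bperf.sock, enable per-controller NVMe error counting with unlimited bdev retries, and attach the TCP target with data digest (--ddgst) enabled. A condensed, standalone sketch of the commands shown in the xtrace (waitforlisten and the bperf_rpc wrapper are replaced with direct equivalents, which is an assumption for readability):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Start bdevperf idle (-z) so the randwrite job can be triggered later over its RPC socket
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

    # Stand-in for waitforlisten: block until the bperf RPC socket exists
    while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done

    # Count NVMe errors per controller and retry failed I/O indefinitely instead of failing the bdev
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the TCP target with data digest enabled, exposing it as bdev nvme0 (namespace nvme0n1)
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0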
common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.053 14:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:12.053 14:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.053 14:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:12.053 14:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:12.053 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:12.053 Zero copy mechanism will not be used. 00:36:12.053 Running I/O for 2 seconds... 00:36:12.053 [2024-07-10 14:38:21.489680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.053 [2024-07-10 14:38:21.490176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.053 [2024-07-10 14:38:21.490234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.053 [2024-07-10 14:38:21.505233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.053 [2024-07-10 14:38:21.505738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.053 [2024-07-10 14:38:21.505797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.053 [2024-07-10 14:38:21.520185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.053 [2024-07-10 14:38:21.520660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.053 [2024-07-10 14:38:21.520724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.310 [2024-07-10 14:38:21.534449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.310 [2024-07-10 14:38:21.534961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.310 [2024-07-10 14:38:21.535008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.310 [2024-07-10 14:38:21.549233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.310 [2024-07-10 14:38:21.549732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.549778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.565348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.565839] 
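Once nvme0n1 is visible, the test appears to arm CRC32C corruption in the accel layer (via the default RPC socket, i.e. not the bperf socket) and then kick off the queued job. Each corrupted digest shows up in the records below as a "Data digest error" on the TCP qpair, and the affected WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) and is retried, which is what feeds the counter read back by get_transient_errcount. A sketch of those two steps as plain commands, with arguments taken verbatim from the xtrace (what -i 32 controls is not stated in this log):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Inject 'corrupt' errors into crc32c operations in the accel layer (default RPC socket)
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the configured randwrite job on the idle bdevperf instance for its -t 2 seconds
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests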
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.565894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.581176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.581672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.581724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.598106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.598571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.598627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.613860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.614326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.614372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.631063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.631374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.631439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.645982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.646239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.646282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.659720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.660202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.660256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.674122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.674604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.674641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.688194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.688691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.688729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.703390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.703841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.703887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.716271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.716761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.716806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.729055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.729487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.729524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.743845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.744127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.744168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.756349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.756892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.756939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 
14:38:21.769593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.770197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.770236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.311 [2024-07-10 14:38:21.782883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.311 [2024-07-10 14:38:21.783340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.311 [2024-07-10 14:38:21.783378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.569 [2024-07-10 14:38:21.795920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.569 [2024-07-10 14:38:21.796485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.569 [2024-07-10 14:38:21.796526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.569 [2024-07-10 14:38:21.809232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.569 [2024-07-10 14:38:21.809771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.569 [2024-07-10 14:38:21.809810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.569 [2024-07-10 14:38:21.822442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.569 [2024-07-10 14:38:21.822971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.569 [2024-07-10 14:38:21.823009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.569 [2024-07-10 14:38:21.835219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.569 [2024-07-10 14:38:21.835791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.569 [2024-07-10 14:38:21.835845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.569 [2024-07-10 14:38:21.848091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.569 [2024-07-10 14:38:21.848666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.569 [2024-07-10 14:38:21.848722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.569 [2024-07-10 14:38:21.862519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.569 [2024-07-10 14:38:21.863045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.569 [2024-07-10 14:38:21.863084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.569 [2024-07-10 14:38:21.875646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.569 [2024-07-10 14:38:21.876229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.569 [2024-07-10 14:38:21.876278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.569 [2024-07-10 14:38:21.889724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.569 [2024-07-10 14:38:21.890227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.569 [2024-07-10 14:38:21.890265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.569 [2024-07-10 14:38:21.904073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.569 [2024-07-10 14:38:21.904656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.569 [2024-07-10 14:38:21.904695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.569 [2024-07-10 14:38:21.918197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.569 [2024-07-10 14:38:21.918682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.569 [2024-07-10 14:38:21.918736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.569 [2024-07-10 14:38:21.931511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.569 [2024-07-10 14:38:21.932060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.569 [2024-07-10 14:38:21.932098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.569 [2024-07-10 14:38:21.944730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.569 [2024-07-10 14:38:21.945275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.569 [2024-07-10 14:38:21.945323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.570 [2024-07-10 14:38:21.957760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.570 [2024-07-10 14:38:21.958357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.570 [2024-07-10 14:38:21.958413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.570 [2024-07-10 14:38:21.971417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.570 [2024-07-10 14:38:21.971904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.570 [2024-07-10 14:38:21.971957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.570 [2024-07-10 14:38:21.985117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.570 [2024-07-10 14:38:21.985620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.570 [2024-07-10 14:38:21.985693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.570 [2024-07-10 14:38:21.997476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.570 [2024-07-10 14:38:21.997864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.570 [2024-07-10 14:38:21.997905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.570 [2024-07-10 14:38:22.009935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.570 [2024-07-10 14:38:22.010525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.570 [2024-07-10 14:38:22.010565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.570 [2024-07-10 14:38:22.023538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.570 [2024-07-10 14:38:22.024042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.570 [2024-07-10 14:38:22.024082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.570 [2024-07-10 14:38:22.036214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.570 [2024-07-10 14:38:22.036693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:12.570 [2024-07-10 14:38:22.036744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.050181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.050765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.050817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.062827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.063570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.063611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.077015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.077694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.077749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.090640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.091130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.091180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.104487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.104974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.105013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.117604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.118105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.118156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.131165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.131684] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.131751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.144467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.145137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.145187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.158532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.159110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.159174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.171366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.172031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.172068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.185179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.185777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.185828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.198544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.199069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.199120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.212101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.212546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.212595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.225656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 
[2024-07-10 14:38:22.226278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.226326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.238905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.239578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.239618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.251875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.252554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.252594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.265500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.266105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.266144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.278246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.278637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.278681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.290327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.290919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.290958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:12.829 [2024-07-10 14:38:22.303534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:12.829 [2024-07-10 14:38:22.304171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.829 [2024-07-10 14:38:22.304218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.087 [2024-07-10 14:38:22.317222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.087 [2024-07-10 14:38:22.317890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.087 [2024-07-10 14:38:22.317928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.087 [2024-07-10 14:38:22.330905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.087 [2024-07-10 14:38:22.331337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.087 [2024-07-10 14:38:22.331402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.087 [2024-07-10 14:38:22.343557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.087 [2024-07-10 14:38:22.344134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.087 [2024-07-10 14:38:22.344187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.087 [2024-07-10 14:38:22.356968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.087 [2024-07-10 14:38:22.357515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.087 [2024-07-10 14:38:22.357568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.087 [2024-07-10 14:38:22.369052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.087 [2024-07-10 14:38:22.369628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.087 [2024-07-10 14:38:22.369667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.087 [2024-07-10 14:38:22.382758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.087 [2024-07-10 14:38:22.383360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.087 [2024-07-10 14:38:22.383422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.087 [2024-07-10 14:38:22.395084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.087 [2024-07-10 14:38:22.395556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.087 [2024-07-10 14:38:22.395594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.087 
[2024-07-10 14:38:22.408981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.087 [2024-07-10 14:38:22.409507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.088 [2024-07-10 14:38:22.409547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.088 [2024-07-10 14:38:22.422806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.088 [2024-07-10 14:38:22.423356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.088 [2024-07-10 14:38:22.423420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.088 [2024-07-10 14:38:22.436321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.088 [2024-07-10 14:38:22.436886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.088 [2024-07-10 14:38:22.436938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.088 [2024-07-10 14:38:22.448239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.088 [2024-07-10 14:38:22.448698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.088 [2024-07-10 14:38:22.448757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.088 [2024-07-10 14:38:22.461481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.088 [2024-07-10 14:38:22.462121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.088 [2024-07-10 14:38:22.462158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.088 [2024-07-10 14:38:22.475331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.088 [2024-07-10 14:38:22.475903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.088 [2024-07-10 14:38:22.475943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.088 [2024-07-10 14:38:22.489006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.088 [2024-07-10 14:38:22.489633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.088 [2024-07-10 14:38:22.489673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.088 [2024-07-10 14:38:22.501790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.088 [2024-07-10 14:38:22.502384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.088 [2024-07-10 14:38:22.502453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.088 [2024-07-10 14:38:22.513042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.088 [2024-07-10 14:38:22.513609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.088 [2024-07-10 14:38:22.513649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.088 [2024-07-10 14:38:22.526194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.088 [2024-07-10 14:38:22.526731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.088 [2024-07-10 14:38:22.526797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.088 [2024-07-10 14:38:22.538911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.088 [2024-07-10 14:38:22.539398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.088 [2024-07-10 14:38:22.539462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.088 [2024-07-10 14:38:22.551207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.088 [2024-07-10 14:38:22.551817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.088 [2024-07-10 14:38:22.551856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.088 [2024-07-10 14:38:22.563498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.088 [2024-07-10 14:38:22.564070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.088 [2024-07-10 14:38:22.564126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.576556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.577068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.577107] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.589959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.590571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.590612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.601540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.602162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.602201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.614089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.614408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.614463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.628240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.628754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.628795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.641624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.642066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.642123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.654295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.654728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.654786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.667006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.667628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.667682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.681234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.681755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.681809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.694257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.694803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.694865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.707322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.707824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.707863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.720889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.721408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.721471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.733456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.734007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.734046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.746916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.747408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.747470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.759162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.759635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.759677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.772141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.772683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.772723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.347 [2024-07-10 14:38:22.785349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.347 [2024-07-10 14:38:22.785962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.347 [2024-07-10 14:38:22.786016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.348 [2024-07-10 14:38:22.796969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.348 [2024-07-10 14:38:22.797422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.348 [2024-07-10 14:38:22.797482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.348 [2024-07-10 14:38:22.809490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.348 [2024-07-10 14:38:22.809982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.348 [2024-07-10 14:38:22.810021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.348 [2024-07-10 14:38:22.821859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.348 [2024-07-10 14:38:22.822320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.348 [2024-07-10 14:38:22.822360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:22.834188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:22.834718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:22.834761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:22.847967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:22.848393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:22.848453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:22.860574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:22.861114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:22.861159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:22.873778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:22.874247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:22.874301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:22.886281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:22.886820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:22.886860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:22.899313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:22.899715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:22.899755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:22.911922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:22.912515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:22.912569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:22.924804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:22.925256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:22.925312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:22.937634] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:22.938215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:22.938269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:22.949607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:22.950006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:22.950061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:22.961419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:22.961946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:22.962001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:22.974909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:22.975520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:22.975560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:22.988495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:22.989001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:22.989057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:23.001042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:23.001511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:23.001566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:23.014678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:23.015077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:23.015118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:23.028181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:23.028769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:23.028826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:23.040911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:23.041353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:23.041391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:23.054285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:23.054690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:23.054733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:23.067065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:23.067471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:23.067511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.607 [2024-07-10 14:38:23.080029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.607 [2024-07-10 14:38:23.080594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.607 [2024-07-10 14:38:23.080635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.865 [2024-07-10 14:38:23.093582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.865 [2024-07-10 14:38:23.094085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.865 [2024-07-10 14:38:23.094144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.865 [2024-07-10 14:38:23.105760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.865 [2024-07-10 14:38:23.106197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.865 [2024-07-10 14:38:23.106239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.865 [2024-07-10 14:38:23.118709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.865 [2024-07-10 14:38:23.119221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.865 [2024-07-10 14:38:23.119262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.865 [2024-07-10 14:38:23.132628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.865 [2024-07-10 14:38:23.133071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.865 [2024-07-10 14:38:23.133113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.865 [2024-07-10 14:38:23.145234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.865 [2024-07-10 14:38:23.145729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.865 [2024-07-10 14:38:23.145769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.865 [2024-07-10 14:38:23.158403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.865 [2024-07-10 14:38:23.158920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.865 [2024-07-10 14:38:23.158960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.865 [2024-07-10 14:38:23.170162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.865 [2024-07-10 14:38:23.170685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.865 [2024-07-10 14:38:23.170724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.865 [2024-07-10 14:38:23.182518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.865 [2024-07-10 14:38:23.183028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.865 [2024-07-10 14:38:23.183082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.865 [2024-07-10 14:38:23.194347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.865 [2024-07-10 14:38:23.194843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:13.865 [2024-07-10 14:38:23.194884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.865 [2024-07-10 14:38:23.208196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.865 [2024-07-10 14:38:23.208679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.865 [2024-07-10 14:38:23.208743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.865 [2024-07-10 14:38:23.220828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.865 [2024-07-10 14:38:23.221222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.865 [2024-07-10 14:38:23.221278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.865 [2024-07-10 14:38:23.234144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.866 [2024-07-10 14:38:23.234697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.866 [2024-07-10 14:38:23.234739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.866 [2024-07-10 14:38:23.245738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.866 [2024-07-10 14:38:23.246181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.866 [2024-07-10 14:38:23.246219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.866 [2024-07-10 14:38:23.259351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.866 [2024-07-10 14:38:23.259821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.866 [2024-07-10 14:38:23.259876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.866 [2024-07-10 14:38:23.272546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.866 [2024-07-10 14:38:23.273065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.866 [2024-07-10 14:38:23.273104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.866 [2024-07-10 14:38:23.284981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.866 [2024-07-10 14:38:23.285490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.866 [2024-07-10 14:38:23.285530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.866 [2024-07-10 14:38:23.296439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.866 [2024-07-10 14:38:23.296861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.866 [2024-07-10 14:38:23.296900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:13.866 [2024-07-10 14:38:23.309500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.866 [2024-07-10 14:38:23.310166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.866 [2024-07-10 14:38:23.310219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:13.866 [2024-07-10 14:38:23.324476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.866 [2024-07-10 14:38:23.325194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.866 [2024-07-10 14:38:23.325247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:13.866 [2024-07-10 14:38:23.338458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:13.866 [2024-07-10 14:38:23.338877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.866 [2024-07-10 14:38:23.338916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:14.124 [2024-07-10 14:38:23.351117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:14.124 [2024-07-10 14:38:23.351556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.124 [2024-07-10 14:38:23.351597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:14.124 [2024-07-10 14:38:23.363539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:14.124 [2024-07-10 14:38:23.363941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.124 [2024-07-10 14:38:23.363981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:14.124 [2024-07-10 14:38:23.376780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:36:14.124 [2024-07-10 14:38:23.377306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.124 [2024-07-10 14:38:23.377343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:14.124 [2024-07-10 14:38:23.389565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:14.124 [2024-07-10 14:38:23.390081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.124 [2024-07-10 14:38:23.390135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:14.124 [2024-07-10 14:38:23.402440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:14.124 [2024-07-10 14:38:23.402976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.124 [2024-07-10 14:38:23.403028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:14.124 [2024-07-10 14:38:23.415232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:14.124 [2024-07-10 14:38:23.415794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.124 [2024-07-10 14:38:23.415853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:14.124 [2024-07-10 14:38:23.427246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:14.124 [2024-07-10 14:38:23.427746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.124 [2024-07-10 14:38:23.427808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:14.124 [2024-07-10 14:38:23.439307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:14.124 [2024-07-10 14:38:23.439822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.124 [2024-07-10 14:38:23.439862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:14.124 [2024-07-10 14:38:23.452874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:14.124 [2024-07-10 14:38:23.453474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.124 [2024-07-10 14:38:23.453528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:14.124 [2024-07-10 14:38:23.466536] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:14.124 [2024-07-10 14:38:23.466972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.124 [2024-07-10 14:38:23.467030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:14.124 00:36:14.124 Latency(us) 00:36:14.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.124 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:14.124 nvme0n1 : 2.01 2328.13 291.02 0.00 0.00 6855.02 4975.88 18738.44 00:36:14.124 =================================================================================================================== 00:36:14.124 Total : 2328.13 291.02 0.00 0.00 6855.02 4975.88 18738.44 00:36:14.124 0 00:36:14.124 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:14.124 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:14.124 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:14.124 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:14.124 | .driver_specific 00:36:14.124 | .nvme_error 00:36:14.124 | .status_code 00:36:14.124 | .command_transient_transport_error' 00:36:14.382 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 150 > 0 )) 00:36:14.382 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1547384 00:36:14.382 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1547384 ']' 00:36:14.382 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1547384 00:36:14.382 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:14.382 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:14.382 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1547384 00:36:14.382 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:14.382 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:14.382 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1547384' 00:36:14.382 killing process with pid 1547384 00:36:14.382 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1547384 00:36:14.382 Received shutdown signal, test time was about 2.000000 seconds 00:36:14.382 00:36:14.382 Latency(us) 00:36:14.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.382 =================================================================================================================== 00:36:14.382 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:14.382 14:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1547384 00:36:15.754 14:38:24 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1545476 00:36:15.754 14:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1545476 ']' 00:36:15.754 14:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1545476 00:36:15.754 14:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:15.754 14:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:15.754 14:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1545476 00:36:15.754 14:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:15.754 14:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:15.754 14:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1545476' 00:36:15.754 killing process with pid 1545476 00:36:15.754 14:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1545476 00:36:15.754 14:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1545476 00:36:16.687 00:36:16.687 real 0m23.461s 00:36:16.687 user 0m45.408s 00:36:16.687 sys 0m4.617s 00:36:16.687 14:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:16.687 14:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:16.687 ************************************ 00:36:16.687 END TEST nvmf_digest_error 00:36:16.687 ************************************ 00:36:16.687 14:38:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:36:16.687 14:38:26 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:16.687 14:38:26 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:16.944 rmmod nvme_tcp 00:36:16.944 rmmod nvme_fabrics 00:36:16.944 rmmod nvme_keyring 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1545476 ']' 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1545476 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1545476 ']' 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1545476 00:36:16.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1545476) - No such process 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1545476 is not found' 
00:36:16.944 Process with pid 1545476 is not found 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:16.944 14:38:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:16.945 14:38:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:16.945 14:38:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:18.846 14:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:18.846 00:36:18.846 real 0m52.659s 00:36:18.846 user 1m33.317s 00:36:18.846 sys 0m10.778s 00:36:18.846 14:38:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:18.846 14:38:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:18.846 ************************************ 00:36:18.846 END TEST nvmf_digest 00:36:18.846 ************************************ 00:36:18.846 14:38:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:18.846 14:38:28 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:36:18.846 14:38:28 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:36:18.846 14:38:28 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:36:18.846 14:38:28 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:18.846 14:38:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:18.846 14:38:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:18.846 14:38:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:18.846 ************************************ 00:36:18.846 START TEST nvmf_bdevperf 00:36:18.846 ************************************ 00:36:18.846 14:38:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:19.104 * Looking for test storage... 
00:36:19.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.104 14:38:28 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:36:19.105 14:38:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:20.999 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:20.999 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:20.999 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:20.999 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:20.999 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:21.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:21.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:36:21.000 00:36:21.000 --- 10.0.0.2 ping statistics --- 00:36:21.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.000 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:21.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:21.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:36:21.000 00:36:21.000 --- 10.0.0.1 ping statistics --- 00:36:21.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.000 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1550106 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1550106 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1550106 ']' 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:21.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:21.000 14:38:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:21.000 [2024-07-10 14:38:30.381293] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:36:21.000 [2024-07-10 14:38:30.381457] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:21.000 EAL: No free 2048 kB hugepages reported on node 1 00:36:21.257 [2024-07-10 14:38:30.517209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:21.513 [2024-07-10 14:38:30.771200] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:21.513 [2024-07-10 14:38:30.771271] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:21.513 [2024-07-10 14:38:30.771315] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:21.513 [2024-07-10 14:38:30.771336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:21.514 [2024-07-10 14:38:30.771357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:21.514 [2024-07-10 14:38:30.771512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:21.514 [2024-07-10 14:38:30.771561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:21.514 [2024-07-10 14:38:30.771569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.077 [2024-07-10 14:38:31.345101] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.077 Malloc0 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:22.077 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:36:22.078 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.078 [2024-07-10 14:38:31.457096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:22.078 14:38:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.078 14:38:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:22.078 14:38:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:22.078 14:38:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:36:22.078 14:38:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:36:22.078 14:38:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:22.078 14:38:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:22.078 { 00:36:22.078 "params": { 00:36:22.078 "name": "Nvme$subsystem", 00:36:22.078 "trtype": "$TEST_TRANSPORT", 00:36:22.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:22.078 "adrfam": "ipv4", 00:36:22.078 "trsvcid": "$NVMF_PORT", 00:36:22.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:22.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:22.078 "hdgst": ${hdgst:-false}, 00:36:22.078 "ddgst": ${ddgst:-false} 00:36:22.078 }, 00:36:22.078 "method": "bdev_nvme_attach_controller" 00:36:22.078 } 00:36:22.078 EOF 00:36:22.078 )") 00:36:22.078 14:38:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:36:22.078 14:38:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:36:22.078 14:38:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:36:22.078 14:38:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:22.078 "params": { 00:36:22.078 "name": "Nvme1", 00:36:22.078 "trtype": "tcp", 00:36:22.078 "traddr": "10.0.0.2", 00:36:22.078 "adrfam": "ipv4", 00:36:22.078 "trsvcid": "4420", 00:36:22.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:22.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:22.078 "hdgst": false, 00:36:22.078 "ddgst": false 00:36:22.078 }, 00:36:22.078 "method": "bdev_nvme_attach_controller" 00:36:22.078 }' 00:36:22.078 [2024-07-10 14:38:31.540392] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:36:22.078 [2024-07-10 14:38:31.540591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550257 ] 00:36:22.335 EAL: No free 2048 kB hugepages reported on node 1 00:36:22.335 [2024-07-10 14:38:31.667721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.592 [2024-07-10 14:38:31.904150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.157 Running I/O for 1 seconds... 
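For reference, the target setup and the 1-second verify run traced above can be reproduced by hand. This is a minimal sketch, assuming nvmf_tgt is already running as in the trace (started inside the cvl_0_0_ns_spdk namespace with -m 0xE) and reachable via rpc.py's default /var/tmp/spdk.sock; the JSON wrapper around the bdev_nvme_attach_controller fragment printed above, and the /tmp file path, are assumptions rather than the harness's literal mechanics (host/bdevperf.sh@27 feeds the config through /dev/fd/62 instead).

    # Target-side configuration, mirroring the rpc_cmd calls in the trace.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host-side bdev config consumed by bdevperf; params match the fragment
    # printed by gen_nvmf_target_json above, the surrounding wrapper is assumed.
    cat > /tmp/bperf_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # 128-deep, 4 KiB, verify workload for 1 second, as in host/bdevperf.sh@27.
    $SPDK/build/examples/bdevperf --json /tmp/bperf_nvme.json -q 128 -o 4096 -w verify -t 1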
00:36:24.091 00:36:24.091 Latency(us) 00:36:24.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.091 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:24.091 Verification LBA range: start 0x0 length 0x4000 00:36:24.091 Nvme1n1 : 1.01 6304.47 24.63 0.00 0.00 20215.23 2487.94 16893.72 00:36:24.091 =================================================================================================================== 00:36:24.091 Total : 6304.47 24.63 0.00 0.00 20215.23 2487.94 16893.72 00:36:25.022 14:38:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1550534 00:36:25.022 14:38:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:25.022 14:38:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:25.022 14:38:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:25.022 14:38:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:36:25.023 14:38:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:36:25.023 14:38:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:25.023 14:38:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:25.023 { 00:36:25.023 "params": { 00:36:25.023 "name": "Nvme$subsystem", 00:36:25.023 "trtype": "$TEST_TRANSPORT", 00:36:25.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:25.023 "adrfam": "ipv4", 00:36:25.023 "trsvcid": "$NVMF_PORT", 00:36:25.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:25.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:25.023 "hdgst": ${hdgst:-false}, 00:36:25.023 "ddgst": ${ddgst:-false} 00:36:25.023 }, 00:36:25.023 "method": "bdev_nvme_attach_controller" 00:36:25.023 } 00:36:25.023 EOF 00:36:25.023 )") 00:36:25.023 14:38:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:36:25.023 14:38:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:36:25.023 14:38:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:36:25.023 14:38:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:25.023 "params": { 00:36:25.023 "name": "Nvme1", 00:36:25.023 "trtype": "tcp", 00:36:25.023 "traddr": "10.0.0.2", 00:36:25.023 "adrfam": "ipv4", 00:36:25.023 "trsvcid": "4420", 00:36:25.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:25.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:25.023 "hdgst": false, 00:36:25.023 "ddgst": false 00:36:25.023 }, 00:36:25.023 "method": "bdev_nvme_attach_controller" 00:36:25.023 }' 00:36:25.314 [2024-07-10 14:38:34.529737] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:36:25.314 [2024-07-10 14:38:34.529906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550534 ] 00:36:25.314 EAL: No free 2048 kB hugepages reported on node 1 00:36:25.314 [2024-07-10 14:38:34.663137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:25.607 [2024-07-10 14:38:34.900470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.185 Running I/O for 15 seconds... 
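Note: the second bdevperf run above gets its controller configuration from gen_nvmf_target_json via /dev/fd/63; the printed object attaches an NVMe bdev named Nvme1 to the target at 10.0.0.2:4420 through bdev_nvme_attach_controller. A sketch of an equivalent standalone invocation follows. Only the inner params/method object appears verbatim in the log, so the surrounding "subsystems"/"bdev" wrapper and the /tmp/bdevperf.json file name are assumptions about how such a config is usually laid out:
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload flags as the test: queue depth 128, 4 KiB I/O, verify pattern, 15 s run,
# -f passed as in the test (keep running when I/O fails).
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f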
00:36:28.089 14:38:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1550106 00:36:28.090 14:38:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:28.090 [2024-07-10 14:38:37.477643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:28.090 [2024-07-10 14:38:37.477743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.477816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:28.090 [2024-07-10 14:38:37.477845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.477873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:28.090 [2024-07-10 14:38:37.477898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.477926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:28.090 [2024-07-10 14:38:37.477952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.477980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:28.090 [2024-07-10 14:38:37.478008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:28.090 [2024-07-10 14:38:37.478061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:28.090 [2024-07-10 14:38:37.478110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.478160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.478211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 
14:38:37.478262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.478312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.478363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.478464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.478528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.478573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.478617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.478662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.478706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.478764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.478828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.478878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.478928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.478955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:28.090 [2024-07-10 14:38:37.478978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.479004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.479028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.479055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.479085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.479113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.479137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.479163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.479186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.479214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.479238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.479265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.479289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.479315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.479339] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.479366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.479390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.479436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.479477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.479501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.479522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.479545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.090 [2024-07-10 14:38:37.479565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.090 [2024-07-10 14:38:37.479588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.479608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.479631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.479651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.479673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.479693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.479744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.479765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.479804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.479828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.479855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.479879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.479905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.479929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.479955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.479978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:36:28.091 [2024-07-10 14:38:37.480937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.480961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.480989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.481013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.481040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.481069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.481096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.481121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.481148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.481172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.481198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.481222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.481248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.481272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.481299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.481322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.481349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.481373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.091 [2024-07-10 14:38:37.481399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.091 [2024-07-10 14:38:37.481439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 
14:38:37.481483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.481505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.481528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.481549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.481573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.481594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.481617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.481638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.481661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.481682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.481724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.481746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.481798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.481823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.481851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.481875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.481901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.481925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.481951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.481976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482531] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.482950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.482976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.483000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.483031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 
nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.483055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.483081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.483105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.483131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.483154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.483181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.483204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.483230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.483255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.483282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.483305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.483331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.483355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.483381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.483404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.483437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.483477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.092 [2024-07-10 14:38:37.483501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.092 [2024-07-10 14:38:37.483521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.483550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102528 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.483570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.483593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.483613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.483636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.483659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.483683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.483703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.483740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.483758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.483797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.483821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.483847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.483871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.483897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.483920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.483946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.483969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.483996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:28.093 [2024-07-10 14:38:37.484019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.484045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:28.093 [2024-07-10 14:38:37.484068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.484095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.484118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.484144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.484167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.484193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.484217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.484243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.484267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.484297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.484322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.484349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:28.093 [2024-07-10 14:38:37.484373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.484398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:36:28.093 [2024-07-10 14:38:37.484433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:28.093 [2024-07-10 14:38:37.484456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:28.093 [2024-07-10 14:38:37.484490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102656 len:8 PRP1 0x0 PRP2 0x0 00:36:28.093 [2024-07-10 14:38:37.484510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.484813] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller. 
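Note: everything from the kill -9 a few lines up (which appears to take down the target process, per host/bdevperf.sh@33) to this point is a single event on the host side: the TCP connection dropped, every outstanding read/write was completed manually as ABORTED - SQ DELETION, and qpair 0x6150001f2c80 was freed before the reset path takes over. When working from a saved console log, a couple of one-liners summarize such a dump; build.log is a placeholder for wherever the output was captured:
grep -c 'ABORTED - SQ DELETION' build.log                        # how many commands were aborted
grep -o 'lba:[0-9]*' build.log | sort -t: -k2 -n | uniq | head   # which LBAs they covered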
00:36:28.093 [2024-07-10 14:38:37.484926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:28.093 [2024-07-10 14:38:37.484963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.484990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:28.093 [2024-07-10 14:38:37.485013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.485036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:28.093 [2024-07-10 14:38:37.485058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.485081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:28.093 [2024-07-10 14:38:37.485103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.093 [2024-07-10 14:38:37.485124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.093 [2024-07-10 14:38:37.489420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.093 [2024-07-10 14:38:37.489510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.093 [2024-07-10 14:38:37.490450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.093 [2024-07-10 14:38:37.490516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.093 [2024-07-10 14:38:37.490542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.093 [2024-07-10 14:38:37.490841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.093 [2024-07-10 14:38:37.491131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.093 [2024-07-10 14:38:37.491164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.093 [2024-07-10 14:38:37.491194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.093 [2024-07-10 14:38:37.495347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
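Note: from here the log settles into a retry loop with a fixed shape: nvme_ctrlr_disconnect announces "resetting controller", the TCP connect() to 10.0.0.2:4420 is refused (errno 111, presumably because the target was just killed), controller reinitialization fails, bdev_nvme reports "Resetting controller failed.", and a few milliseconds later the next attempt starts. If the retry cadence in a setup like this needed to be bounded explicitly, bdev_nvme_attach_controller accepts reconnect/loss timeouts; the sketch below uses current rpc.py option names and arbitrary values as an assumption, it is not something this test does:
scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --reconnect-delay-sec 5 --ctrlr-loss-timeout-sec 30 --fast-io-fail-timeout-sec 10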
00:36:28.093 [2024-07-10 14:38:37.504273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.093 [2024-07-10 14:38:37.504816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.093 [2024-07-10 14:38:37.504866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.093 [2024-07-10 14:38:37.504892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.093 [2024-07-10 14:38:37.505177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.093 [2024-07-10 14:38:37.505489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.093 [2024-07-10 14:38:37.505530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.093 [2024-07-10 14:38:37.505549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.093 [2024-07-10 14:38:37.509643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.093 [2024-07-10 14:38:37.518808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.093 [2024-07-10 14:38:37.519349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.093 [2024-07-10 14:38:37.519395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.093 [2024-07-10 14:38:37.519418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.093 [2024-07-10 14:38:37.519750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.093 [2024-07-10 14:38:37.520037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.093 [2024-07-10 14:38:37.520069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.093 [2024-07-10 14:38:37.520091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.093 [2024-07-10 14:38:37.524199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.093 [2024-07-10 14:38:37.533397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.093 [2024-07-10 14:38:37.533896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.093 [2024-07-10 14:38:37.533945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.093 [2024-07-10 14:38:37.533971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.093 [2024-07-10 14:38:37.534256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.093 [2024-07-10 14:38:37.534557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.093 [2024-07-10 14:38:37.534590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.093 [2024-07-10 14:38:37.534615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.093 [2024-07-10 14:38:37.538715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.093 [2024-07-10 14:38:37.547899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.093 [2024-07-10 14:38:37.548442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.093 [2024-07-10 14:38:37.548497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.093 [2024-07-10 14:38:37.548523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.093 [2024-07-10 14:38:37.548807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.093 [2024-07-10 14:38:37.549092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.093 [2024-07-10 14:38:37.549124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.093 [2024-07-10 14:38:37.549146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.093 [2024-07-10 14:38:37.553240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.093 [2024-07-10 14:38:37.562347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.093 [2024-07-10 14:38:37.562895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.093 [2024-07-10 14:38:37.562947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.093 [2024-07-10 14:38:37.562973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.093 [2024-07-10 14:38:37.563257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.094 [2024-07-10 14:38:37.563578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.094 [2024-07-10 14:38:37.563623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.094 [2024-07-10 14:38:37.563673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.352 [2024-07-10 14:38:37.567935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.352 [2024-07-10 14:38:37.576987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.352 [2024-07-10 14:38:37.577502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.352 [2024-07-10 14:38:37.577546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.352 [2024-07-10 14:38:37.577576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.352 [2024-07-10 14:38:37.577861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.352 [2024-07-10 14:38:37.578147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.353 [2024-07-10 14:38:37.578179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.353 [2024-07-10 14:38:37.578203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.353 [2024-07-10 14:38:37.582287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.353 [2024-07-10 14:38:37.591396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.353 [2024-07-10 14:38:37.591917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.353 [2024-07-10 14:38:37.591966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.353 [2024-07-10 14:38:37.591993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.353 [2024-07-10 14:38:37.592281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.353 [2024-07-10 14:38:37.592580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.353 [2024-07-10 14:38:37.592612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.353 [2024-07-10 14:38:37.592634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.353 [2024-07-10 14:38:37.596732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.353 [2024-07-10 14:38:37.605863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.353 [2024-07-10 14:38:37.606412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.353 [2024-07-10 14:38:37.606470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.353 [2024-07-10 14:38:37.606496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.353 [2024-07-10 14:38:37.606781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.353 [2024-07-10 14:38:37.607067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.353 [2024-07-10 14:38:37.607099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.353 [2024-07-10 14:38:37.607121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.353 [2024-07-10 14:38:37.611193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.353 [2024-07-10 14:38:37.620288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.353 [2024-07-10 14:38:37.620805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.353 [2024-07-10 14:38:37.620855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.353 [2024-07-10 14:38:37.620880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.353 [2024-07-10 14:38:37.621163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.353 [2024-07-10 14:38:37.621462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.353 [2024-07-10 14:38:37.621494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.353 [2024-07-10 14:38:37.621516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.353 [2024-07-10 14:38:37.625602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.353 [2024-07-10 14:38:37.634704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.353 [2024-07-10 14:38:37.635211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.353 [2024-07-10 14:38:37.635260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.353 [2024-07-10 14:38:37.635286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.353 [2024-07-10 14:38:37.635590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.353 [2024-07-10 14:38:37.635876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.353 [2024-07-10 14:38:37.635907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.353 [2024-07-10 14:38:37.635936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.353 [2024-07-10 14:38:37.639999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.353 [2024-07-10 14:38:37.649083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.353 [2024-07-10 14:38:37.649604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.353 [2024-07-10 14:38:37.649652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.353 [2024-07-10 14:38:37.649678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.353 [2024-07-10 14:38:37.649962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.353 [2024-07-10 14:38:37.650248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.353 [2024-07-10 14:38:37.650280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.353 [2024-07-10 14:38:37.650302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.353 [2024-07-10 14:38:37.654368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.353 [2024-07-10 14:38:37.663468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.353 [2024-07-10 14:38:37.663972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.353 [2024-07-10 14:38:37.664022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.353 [2024-07-10 14:38:37.664047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.353 [2024-07-10 14:38:37.664331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.353 [2024-07-10 14:38:37.664629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.353 [2024-07-10 14:38:37.664662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.353 [2024-07-10 14:38:37.664684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.353 [2024-07-10 14:38:37.668751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.353 [2024-07-10 14:38:37.677819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.353 [2024-07-10 14:38:37.678366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.353 [2024-07-10 14:38:37.678415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.353 [2024-07-10 14:38:37.678452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.353 [2024-07-10 14:38:37.678736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.353 [2024-07-10 14:38:37.679020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.353 [2024-07-10 14:38:37.679052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.353 [2024-07-10 14:38:37.679074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.353 [2024-07-10 14:38:37.683122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.353 [2024-07-10 14:38:37.692165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.353 [2024-07-10 14:38:37.692683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.353 [2024-07-10 14:38:37.692730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.353 [2024-07-10 14:38:37.692756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.353 [2024-07-10 14:38:37.693038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.353 [2024-07-10 14:38:37.693321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.353 [2024-07-10 14:38:37.693353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.353 [2024-07-10 14:38:37.693375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.353 [2024-07-10 14:38:37.697416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.353 [2024-07-10 14:38:37.706709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.353 [2024-07-10 14:38:37.707219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.353 [2024-07-10 14:38:37.707269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.353 [2024-07-10 14:38:37.707308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.353 [2024-07-10 14:38:37.707605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.353 [2024-07-10 14:38:37.707889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.353 [2024-07-10 14:38:37.707921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.353 [2024-07-10 14:38:37.707943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.353 [2024-07-10 14:38:37.711984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.353 [2024-07-10 14:38:37.721287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.353 [2024-07-10 14:38:37.721801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.353 [2024-07-10 14:38:37.721851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.353 [2024-07-10 14:38:37.721877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.353 [2024-07-10 14:38:37.722160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.353 [2024-07-10 14:38:37.722455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.353 [2024-07-10 14:38:37.722487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.353 [2024-07-10 14:38:37.722509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.353 [2024-07-10 14:38:37.726555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.353 [2024-07-10 14:38:37.735831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.353 [2024-07-10 14:38:37.736304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.353 [2024-07-10 14:38:37.736354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.353 [2024-07-10 14:38:37.736380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.353 [2024-07-10 14:38:37.736680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.354 [2024-07-10 14:38:37.736974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.354 [2024-07-10 14:38:37.737007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.354 [2024-07-10 14:38:37.737029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.354 [2024-07-10 14:38:37.741083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.354 [2024-07-10 14:38:37.750372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.354 [2024-07-10 14:38:37.750916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.354 [2024-07-10 14:38:37.750963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.354 [2024-07-10 14:38:37.750988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.354 [2024-07-10 14:38:37.751270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.354 [2024-07-10 14:38:37.751564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.354 [2024-07-10 14:38:37.751598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.354 [2024-07-10 14:38:37.751625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.354 [2024-07-10 14:38:37.755671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.354 [2024-07-10 14:38:37.764717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.354 [2024-07-10 14:38:37.765365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.354 [2024-07-10 14:38:37.765414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.354 [2024-07-10 14:38:37.765453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.354 [2024-07-10 14:38:37.765736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.354 [2024-07-10 14:38:37.766020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.354 [2024-07-10 14:38:37.766052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.354 [2024-07-10 14:38:37.766074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.354 [2024-07-10 14:38:37.770122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.354 [2024-07-10 14:38:37.779160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.354 [2024-07-10 14:38:37.779653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.354 [2024-07-10 14:38:37.779694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.354 [2024-07-10 14:38:37.779721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.354 [2024-07-10 14:38:37.780004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.354 [2024-07-10 14:38:37.780286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.354 [2024-07-10 14:38:37.780318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.354 [2024-07-10 14:38:37.780345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.354 [2024-07-10 14:38:37.784417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.354 [2024-07-10 14:38:37.793814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.354 [2024-07-10 14:38:37.794377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.354 [2024-07-10 14:38:37.794419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.354 [2024-07-10 14:38:37.794456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.354 [2024-07-10 14:38:37.794741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.354 [2024-07-10 14:38:37.795024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.354 [2024-07-10 14:38:37.795055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.354 [2024-07-10 14:38:37.795077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.354 [2024-07-10 14:38:37.799133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.354 [2024-07-10 14:38:37.808233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.354 [2024-07-10 14:38:37.808762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.354 [2024-07-10 14:38:37.808804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.354 [2024-07-10 14:38:37.808829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.354 [2024-07-10 14:38:37.809111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.354 [2024-07-10 14:38:37.809395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.354 [2024-07-10 14:38:37.809435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.354 [2024-07-10 14:38:37.809460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.354 [2024-07-10 14:38:37.813524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.354 [2024-07-10 14:38:37.822582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.354 [2024-07-10 14:38:37.823052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.354 [2024-07-10 14:38:37.823093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.354 [2024-07-10 14:38:37.823119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.354 [2024-07-10 14:38:37.823400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.354 [2024-07-10 14:38:37.823693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.354 [2024-07-10 14:38:37.823725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.354 [2024-07-10 14:38:37.823747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.354 [2024-07-10 14:38:37.827788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.612 [2024-07-10 14:38:37.837200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.612 [2024-07-10 14:38:37.837719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.612 [2024-07-10 14:38:37.837775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.612 [2024-07-10 14:38:37.837808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.612 [2024-07-10 14:38:37.838093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.612 [2024-07-10 14:38:37.838376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.612 [2024-07-10 14:38:37.838407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.612 [2024-07-10 14:38:37.838441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.612 [2024-07-10 14:38:37.842492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.612 [2024-07-10 14:38:37.851787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:37.852265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:37.852307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:37.852333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.613 [2024-07-10 14:38:37.852630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.613 [2024-07-10 14:38:37.852914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.613 [2024-07-10 14:38:37.852944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.613 [2024-07-10 14:38:37.852967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.613 [2024-07-10 14:38:37.857016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.613 [2024-07-10 14:38:37.866304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:37.866805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:37.866847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:37.866872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.613 [2024-07-10 14:38:37.867155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.613 [2024-07-10 14:38:37.867450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.613 [2024-07-10 14:38:37.867482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.613 [2024-07-10 14:38:37.867504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.613 [2024-07-10 14:38:37.871552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.613 [2024-07-10 14:38:37.880837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:37.881331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:37.881370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:37.881395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.613 [2024-07-10 14:38:37.881692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.613 [2024-07-10 14:38:37.881976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.613 [2024-07-10 14:38:37.882007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.613 [2024-07-10 14:38:37.882030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.613 [2024-07-10 14:38:37.886081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.613 [2024-07-10 14:38:37.895359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:37.895892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:37.895934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:37.895960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.613 [2024-07-10 14:38:37.896241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.613 [2024-07-10 14:38:37.896535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.613 [2024-07-10 14:38:37.896567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.613 [2024-07-10 14:38:37.896589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.613 [2024-07-10 14:38:37.900633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.613 [2024-07-10 14:38:37.909939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:37.910533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:37.910574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:37.910599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.613 [2024-07-10 14:38:37.910881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.613 [2024-07-10 14:38:37.911177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.613 [2024-07-10 14:38:37.911208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.613 [2024-07-10 14:38:37.911239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.613 [2024-07-10 14:38:37.915310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.613 [2024-07-10 14:38:37.924385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:37.924921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:37.924963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:37.924988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.613 [2024-07-10 14:38:37.925268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.613 [2024-07-10 14:38:37.925565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.613 [2024-07-10 14:38:37.925596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.613 [2024-07-10 14:38:37.925623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.613 [2024-07-10 14:38:37.929686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.613 [2024-07-10 14:38:37.938757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:37.939244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:37.939285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:37.939309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.613 [2024-07-10 14:38:37.939602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.613 [2024-07-10 14:38:37.939885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.613 [2024-07-10 14:38:37.939916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.613 [2024-07-10 14:38:37.939938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.613 [2024-07-10 14:38:37.943979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.613 [2024-07-10 14:38:37.953298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:37.953835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:37.953875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:37.953901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.613 [2024-07-10 14:38:37.954182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.613 [2024-07-10 14:38:37.954477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.613 [2024-07-10 14:38:37.954508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.613 [2024-07-10 14:38:37.954530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.613 [2024-07-10 14:38:37.958578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.613 [2024-07-10 14:38:37.967883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:37.968510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:37.968551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:37.968577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.613 [2024-07-10 14:38:37.968858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.613 [2024-07-10 14:38:37.969141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.613 [2024-07-10 14:38:37.969175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.613 [2024-07-10 14:38:37.969197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.613 [2024-07-10 14:38:37.973259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.613 [2024-07-10 14:38:37.982326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:37.982859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:37.982900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:37.982926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.613 [2024-07-10 14:38:37.983208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.613 [2024-07-10 14:38:37.983509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.613 [2024-07-10 14:38:37.983540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.613 [2024-07-10 14:38:37.983562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.613 [2024-07-10 14:38:37.987621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.613 [2024-07-10 14:38:37.996697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:37.997196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:37.997237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:37.997262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.613 [2024-07-10 14:38:37.997556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.613 [2024-07-10 14:38:37.997839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.613 [2024-07-10 14:38:37.997870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.613 [2024-07-10 14:38:37.997891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.613 [2024-07-10 14:38:38.001945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.613 [2024-07-10 14:38:38.011244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:38.011732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:38.011778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:38.011803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.613 [2024-07-10 14:38:38.012083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.613 [2024-07-10 14:38:38.012365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.613 [2024-07-10 14:38:38.012397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.613 [2024-07-10 14:38:38.012419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.613 [2024-07-10 14:38:38.016498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.613 [2024-07-10 14:38:38.025780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:38.026260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:38.026300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:38.026326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.613 [2024-07-10 14:38:38.026624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.613 [2024-07-10 14:38:38.026907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.613 [2024-07-10 14:38:38.026938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.613 [2024-07-10 14:38:38.026960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.613 [2024-07-10 14:38:38.031005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.613 [2024-07-10 14:38:38.040305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:38.040817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:38.040858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:38.040884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.613 [2024-07-10 14:38:38.041165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.613 [2024-07-10 14:38:38.041460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.613 [2024-07-10 14:38:38.041492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.613 [2024-07-10 14:38:38.041514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.613 [2024-07-10 14:38:38.045569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.613 [2024-07-10 14:38:38.054858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.613 [2024-07-10 14:38:38.055473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-07-10 14:38:38.055540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-07-10 14:38:38.055566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.614 [2024-07-10 14:38:38.055854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.614 [2024-07-10 14:38:38.056137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.614 [2024-07-10 14:38:38.056168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.614 [2024-07-10 14:38:38.056189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.614 [2024-07-10 14:38:38.060243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.614 [2024-07-10 14:38:38.069294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.614 [2024-07-10 14:38:38.069813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.614 [2024-07-10 14:38:38.069855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.614 [2024-07-10 14:38:38.069880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.614 [2024-07-10 14:38:38.070162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.614 [2024-07-10 14:38:38.070458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.614 [2024-07-10 14:38:38.070489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.614 [2024-07-10 14:38:38.070517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.614 [2024-07-10 14:38:38.074565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.614 [2024-07-10 14:38:38.083845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.614 [2024-07-10 14:38:38.084486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.614 [2024-07-10 14:38:38.084528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.614 [2024-07-10 14:38:38.084553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.614 [2024-07-10 14:38:38.084833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.614 [2024-07-10 14:38:38.085115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.614 [2024-07-10 14:38:38.085146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.614 [2024-07-10 14:38:38.085168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.614 [2024-07-10 14:38:38.089274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.872 [2024-07-10 14:38:38.098454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.872 [2024-07-10 14:38:38.098980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.872 [2024-07-10 14:38:38.099022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.872 [2024-07-10 14:38:38.099048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.872 [2024-07-10 14:38:38.099330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.872 [2024-07-10 14:38:38.099627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.872 [2024-07-10 14:38:38.099658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.872 [2024-07-10 14:38:38.099680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.872 [2024-07-10 14:38:38.103724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.872 [2024-07-10 14:38:38.112805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.872 [2024-07-10 14:38:38.113280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.872 [2024-07-10 14:38:38.113323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.872 [2024-07-10 14:38:38.113348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.872 [2024-07-10 14:38:38.113643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.872 [2024-07-10 14:38:38.113927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.872 [2024-07-10 14:38:38.113958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.872 [2024-07-10 14:38:38.113994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.872 [2024-07-10 14:38:38.118046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.872 [2024-07-10 14:38:38.127344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.872 [2024-07-10 14:38:38.127859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.872 [2024-07-10 14:38:38.127901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.872 [2024-07-10 14:38:38.127927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.872 [2024-07-10 14:38:38.128208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.872 [2024-07-10 14:38:38.128506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.872 [2024-07-10 14:38:38.128538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.872 [2024-07-10 14:38:38.128559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.872 [2024-07-10 14:38:38.132611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.872 [2024-07-10 14:38:38.141903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.872 [2024-07-10 14:38:38.142392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.872 [2024-07-10 14:38:38.142442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.872 [2024-07-10 14:38:38.142469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.872 [2024-07-10 14:38:38.142751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.872 [2024-07-10 14:38:38.143034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.872 [2024-07-10 14:38:38.143065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.143087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.147136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.873 [2024-07-10 14:38:38.156417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.873 [2024-07-10 14:38:38.156918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.873 [2024-07-10 14:38:38.156959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.873 [2024-07-10 14:38:38.156985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.873 [2024-07-10 14:38:38.157266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.873 [2024-07-10 14:38:38.157563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.873 [2024-07-10 14:38:38.157594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.157616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.161656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.873 [2024-07-10 14:38:38.170937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.873 [2024-07-10 14:38:38.171443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.873 [2024-07-10 14:38:38.171485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.873 [2024-07-10 14:38:38.171510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.873 [2024-07-10 14:38:38.171797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.873 [2024-07-10 14:38:38.172080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.873 [2024-07-10 14:38:38.172112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.172133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.176197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.873 [2024-07-10 14:38:38.185490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.873 [2024-07-10 14:38:38.185996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.873 [2024-07-10 14:38:38.186037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.873 [2024-07-10 14:38:38.186063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.873 [2024-07-10 14:38:38.186344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.873 [2024-07-10 14:38:38.186641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.873 [2024-07-10 14:38:38.186672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.186694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.190743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.873 [2024-07-10 14:38:38.200019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.873 [2024-07-10 14:38:38.200521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.873 [2024-07-10 14:38:38.200561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.873 [2024-07-10 14:38:38.200586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.873 [2024-07-10 14:38:38.200867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.873 [2024-07-10 14:38:38.201150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.873 [2024-07-10 14:38:38.201180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.201202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.205243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.873 [2024-07-10 14:38:38.214570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.873 [2024-07-10 14:38:38.215055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.873 [2024-07-10 14:38:38.215096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.873 [2024-07-10 14:38:38.215122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.873 [2024-07-10 14:38:38.215402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.873 [2024-07-10 14:38:38.215697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.873 [2024-07-10 14:38:38.215736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.215759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.219811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.873 [2024-07-10 14:38:38.229105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.873 [2024-07-10 14:38:38.229621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.873 [2024-07-10 14:38:38.229663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.873 [2024-07-10 14:38:38.229688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.873 [2024-07-10 14:38:38.229969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.873 [2024-07-10 14:38:38.230252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.873 [2024-07-10 14:38:38.230282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.230304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.234357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.873 [2024-07-10 14:38:38.243642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.873 [2024-07-10 14:38:38.244146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.873 [2024-07-10 14:38:38.244187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.873 [2024-07-10 14:38:38.244212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.873 [2024-07-10 14:38:38.244513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.873 [2024-07-10 14:38:38.244796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.873 [2024-07-10 14:38:38.244827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.244848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.248896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.873 [2024-07-10 14:38:38.258180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.873 [2024-07-10 14:38:38.258677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.873 [2024-07-10 14:38:38.258718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.873 [2024-07-10 14:38:38.258743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.873 [2024-07-10 14:38:38.259024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.873 [2024-07-10 14:38:38.259308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.873 [2024-07-10 14:38:38.259340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.259361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.263414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.873 [2024-07-10 14:38:38.272706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.873 [2024-07-10 14:38:38.273200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.873 [2024-07-10 14:38:38.273241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.873 [2024-07-10 14:38:38.273265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.873 [2024-07-10 14:38:38.273561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.873 [2024-07-10 14:38:38.273844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.873 [2024-07-10 14:38:38.273874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.273896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.277943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.873 [2024-07-10 14:38:38.287223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.873 [2024-07-10 14:38:38.287723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.873 [2024-07-10 14:38:38.287764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.873 [2024-07-10 14:38:38.287790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.873 [2024-07-10 14:38:38.288071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.873 [2024-07-10 14:38:38.288354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.873 [2024-07-10 14:38:38.288385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.288406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.292470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.873 [2024-07-10 14:38:38.301759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.873 [2024-07-10 14:38:38.302282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.873 [2024-07-10 14:38:38.302324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.873 [2024-07-10 14:38:38.302349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.873 [2024-07-10 14:38:38.302643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.873 [2024-07-10 14:38:38.302926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.873 [2024-07-10 14:38:38.302958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.302979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.307014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.873 [2024-07-10 14:38:38.316287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.873 [2024-07-10 14:38:38.316764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.873 [2024-07-10 14:38:38.316806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.873 [2024-07-10 14:38:38.316837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.873 [2024-07-10 14:38:38.317119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.873 [2024-07-10 14:38:38.317402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.873 [2024-07-10 14:38:38.317444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.317468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.321504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:28.873 [2024-07-10 14:38:38.330797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.873 [2024-07-10 14:38:38.331292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.873 [2024-07-10 14:38:38.331333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.873 [2024-07-10 14:38:38.331359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.873 [2024-07-10 14:38:38.331651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.873 [2024-07-10 14:38:38.331932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.873 [2024-07-10 14:38:38.331963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.331985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.336024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.873 [2024-07-10 14:38:38.345298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.873 [2024-07-10 14:38:38.345802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.873 [2024-07-10 14:38:38.345843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.873 [2024-07-10 14:38:38.345868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:28.873 [2024-07-10 14:38:38.346149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.873 [2024-07-10 14:38:38.346443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.873 [2024-07-10 14:38:38.346474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.873 [2024-07-10 14:38:38.346496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.873 [2024-07-10 14:38:38.350679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.132 [2024-07-10 14:38:38.359779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.132 [2024-07-10 14:38:38.360289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.132 [2024-07-10 14:38:38.360331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.132 [2024-07-10 14:38:38.360357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.132 [2024-07-10 14:38:38.360654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.132 [2024-07-10 14:38:38.360936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.132 [2024-07-10 14:38:38.360972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.132 [2024-07-10 14:38:38.360995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.132 [2024-07-10 14:38:38.365044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.132 [2024-07-10 14:38:38.374324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.132 [2024-07-10 14:38:38.374845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.132 [2024-07-10 14:38:38.374887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.132 [2024-07-10 14:38:38.374912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.132 [2024-07-10 14:38:38.375193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.132 [2024-07-10 14:38:38.375490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.132 [2024-07-10 14:38:38.375522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.132 [2024-07-10 14:38:38.375544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.132 [2024-07-10 14:38:38.379604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.132 [2024-07-10 14:38:38.388885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.132 [2024-07-10 14:38:38.389361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.132 [2024-07-10 14:38:38.389402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.132 [2024-07-10 14:38:38.389437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.132 [2024-07-10 14:38:38.389722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.132 [2024-07-10 14:38:38.390005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.132 [2024-07-10 14:38:38.390035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.132 [2024-07-10 14:38:38.390056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.132 [2024-07-10 14:38:38.394099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.132 [2024-07-10 14:38:38.403394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.132 [2024-07-10 14:38:38.403872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.132 [2024-07-10 14:38:38.403913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.132 [2024-07-10 14:38:38.403939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.132 [2024-07-10 14:38:38.404219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.132 [2024-07-10 14:38:38.404514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.132 [2024-07-10 14:38:38.404546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.132 [2024-07-10 14:38:38.404568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.132 [2024-07-10 14:38:38.408611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.132 [2024-07-10 14:38:38.417894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.132 [2024-07-10 14:38:38.418419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.132 [2024-07-10 14:38:38.418468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.132 [2024-07-10 14:38:38.418493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.132 [2024-07-10 14:38:38.418774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.132 [2024-07-10 14:38:38.419057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.132 [2024-07-10 14:38:38.419088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.132 [2024-07-10 14:38:38.419111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.132 [2024-07-10 14:38:38.423152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.132 [2024-07-10 14:38:38.432436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.132 [2024-07-10 14:38:38.432934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.132 [2024-07-10 14:38:38.432975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.132 [2024-07-10 14:38:38.433000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.132 [2024-07-10 14:38:38.433281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.132 [2024-07-10 14:38:38.433576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.132 [2024-07-10 14:38:38.433608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.132 [2024-07-10 14:38:38.433629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.132 [2024-07-10 14:38:38.437673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.132 [2024-07-10 14:38:38.446956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.132 [2024-07-10 14:38:38.447460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.132 [2024-07-10 14:38:38.447501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.132 [2024-07-10 14:38:38.447525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.132 [2024-07-10 14:38:38.447826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.132 [2024-07-10 14:38:38.448110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.132 [2024-07-10 14:38:38.448141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.132 [2024-07-10 14:38:38.448163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.132 [2024-07-10 14:38:38.452211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.132 [2024-07-10 14:38:38.461498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.132 [2024-07-10 14:38:38.461966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.132 [2024-07-10 14:38:38.462007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.132 [2024-07-10 14:38:38.462038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.132 [2024-07-10 14:38:38.462319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.132 [2024-07-10 14:38:38.462624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.132 [2024-07-10 14:38:38.462656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.132 [2024-07-10 14:38:38.462681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.132 [2024-07-10 14:38:38.466727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.132 [2024-07-10 14:38:38.476003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.132 [2024-07-10 14:38:38.476509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.132 [2024-07-10 14:38:38.476551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.132 [2024-07-10 14:38:38.476577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.132 [2024-07-10 14:38:38.476859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.132 [2024-07-10 14:38:38.477141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.132 [2024-07-10 14:38:38.477172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.132 [2024-07-10 14:38:38.477193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.132 [2024-07-10 14:38:38.481233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.132 [2024-07-10 14:38:38.490688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.132 [2024-07-10 14:38:38.491171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.133 [2024-07-10 14:38:38.491213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.133 [2024-07-10 14:38:38.491239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.133 [2024-07-10 14:38:38.491534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.133 [2024-07-10 14:38:38.491818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.133 [2024-07-10 14:38:38.491849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.133 [2024-07-10 14:38:38.491870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.133 [2024-07-10 14:38:38.495929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.133 [2024-07-10 14:38:38.505209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.133 [2024-07-10 14:38:38.505686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.133 [2024-07-10 14:38:38.505727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.133 [2024-07-10 14:38:38.505752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.133 [2024-07-10 14:38:38.506034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.133 [2024-07-10 14:38:38.506316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.133 [2024-07-10 14:38:38.506353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.133 [2024-07-10 14:38:38.506376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.133 [2024-07-10 14:38:38.510433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.133 [2024-07-10 14:38:38.519746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.133 [2024-07-10 14:38:38.520326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.133 [2024-07-10 14:38:38.520367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.133 [2024-07-10 14:38:38.520393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.133 [2024-07-10 14:38:38.520683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.133 [2024-07-10 14:38:38.520966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.133 [2024-07-10 14:38:38.520997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.133 [2024-07-10 14:38:38.521019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.133 [2024-07-10 14:38:38.525057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.133 [2024-07-10 14:38:38.534086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.133 [2024-07-10 14:38:38.534587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.133 [2024-07-10 14:38:38.534662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.133 [2024-07-10 14:38:38.534688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.133 [2024-07-10 14:38:38.534972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.133 [2024-07-10 14:38:38.535255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.133 [2024-07-10 14:38:38.535286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.133 [2024-07-10 14:38:38.535307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.133 [2024-07-10 14:38:38.539356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.133 [2024-07-10 14:38:38.548651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.133 [2024-07-10 14:38:38.549147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.133 [2024-07-10 14:38:38.549188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.133 [2024-07-10 14:38:38.549213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.133 [2024-07-10 14:38:38.549505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.133 [2024-07-10 14:38:38.549788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.133 [2024-07-10 14:38:38.549818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.133 [2024-07-10 14:38:38.549840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.133 [2024-07-10 14:38:38.553885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.133 [2024-07-10 14:38:38.563193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.133 [2024-07-10 14:38:38.563711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.133 [2024-07-10 14:38:38.563753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.133 [2024-07-10 14:38:38.563778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.133 [2024-07-10 14:38:38.564059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.133 [2024-07-10 14:38:38.564342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.133 [2024-07-10 14:38:38.564372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.133 [2024-07-10 14:38:38.564393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.133 [2024-07-10 14:38:38.568450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.133 [2024-07-10 14:38:38.577730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.133 [2024-07-10 14:38:38.578223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.133 [2024-07-10 14:38:38.578264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.133 [2024-07-10 14:38:38.578289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.133 [2024-07-10 14:38:38.578581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.133 [2024-07-10 14:38:38.578864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.133 [2024-07-10 14:38:38.578895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.133 [2024-07-10 14:38:38.578917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.133 [2024-07-10 14:38:38.582954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.133 [2024-07-10 14:38:38.592224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.133 [2024-07-10 14:38:38.592710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.133 [2024-07-10 14:38:38.592750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.133 [2024-07-10 14:38:38.592775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.133 [2024-07-10 14:38:38.593055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.133 [2024-07-10 14:38:38.593344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.133 [2024-07-10 14:38:38.593374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.133 [2024-07-10 14:38:38.593396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.133 [2024-07-10 14:38:38.597451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.133 [2024-07-10 14:38:38.606745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.133 [2024-07-10 14:38:38.607226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.133 [2024-07-10 14:38:38.607267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.133 [2024-07-10 14:38:38.607298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.133 [2024-07-10 14:38:38.607595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.133 [2024-07-10 14:38:38.607879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.133 [2024-07-10 14:38:38.607910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.133 [2024-07-10 14:38:38.607931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.391 [2024-07-10 14:38:38.612213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.391 [2024-07-10 14:38:38.621251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.391 [2024-07-10 14:38:38.621757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.391 [2024-07-10 14:38:38.621799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.391 [2024-07-10 14:38:38.621824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.391 [2024-07-10 14:38:38.622108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.391 [2024-07-10 14:38:38.622392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.391 [2024-07-10 14:38:38.622433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.391 [2024-07-10 14:38:38.622458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.392 [2024-07-10 14:38:38.626504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.392 [2024-07-10 14:38:38.635789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.392 [2024-07-10 14:38:38.636282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.392 [2024-07-10 14:38:38.636323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.392 [2024-07-10 14:38:38.636348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.392 [2024-07-10 14:38:38.636642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.392 [2024-07-10 14:38:38.636924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.392 [2024-07-10 14:38:38.636955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.392 [2024-07-10 14:38:38.636976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.392 [2024-07-10 14:38:38.641022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.392 [2024-07-10 14:38:38.650292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.392 [2024-07-10 14:38:38.650790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.392 [2024-07-10 14:38:38.650831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.392 [2024-07-10 14:38:38.650857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.392 [2024-07-10 14:38:38.651137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.392 [2024-07-10 14:38:38.651418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.392 [2024-07-10 14:38:38.651466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.392 [2024-07-10 14:38:38.651489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.392 [2024-07-10 14:38:38.655531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.392 [2024-07-10 14:38:38.664824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.392 [2024-07-10 14:38:38.665324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.392 [2024-07-10 14:38:38.665365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.392 [2024-07-10 14:38:38.665391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.392 [2024-07-10 14:38:38.665685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.392 [2024-07-10 14:38:38.665967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.392 [2024-07-10 14:38:38.665997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.392 [2024-07-10 14:38:38.666019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.392 [2024-07-10 14:38:38.670062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.392 [2024-07-10 14:38:38.679331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.392 [2024-07-10 14:38:38.679824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.392 [2024-07-10 14:38:38.679865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.392 [2024-07-10 14:38:38.679891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.392 [2024-07-10 14:38:38.680172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.392 [2024-07-10 14:38:38.680468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.392 [2024-07-10 14:38:38.680499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.392 [2024-07-10 14:38:38.680521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.392 [2024-07-10 14:38:38.684564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.392 [2024-07-10 14:38:38.693844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.392 [2024-07-10 14:38:38.694484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.392 [2024-07-10 14:38:38.694526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.392 [2024-07-10 14:38:38.694551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.392 [2024-07-10 14:38:38.694831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.392 [2024-07-10 14:38:38.695114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.392 [2024-07-10 14:38:38.695145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.392 [2024-07-10 14:38:38.695166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.392 [2024-07-10 14:38:38.699216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.392 [2024-07-10 14:38:38.708251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.392 [2024-07-10 14:38:38.708754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.392 [2024-07-10 14:38:38.708794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.392 [2024-07-10 14:38:38.708819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.392 [2024-07-10 14:38:38.709098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.392 [2024-07-10 14:38:38.709381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.392 [2024-07-10 14:38:38.709411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.392 [2024-07-10 14:38:38.709445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.392 [2024-07-10 14:38:38.713494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.392 [2024-07-10 14:38:38.722773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.392 [2024-07-10 14:38:38.723283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.392 [2024-07-10 14:38:38.723323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.392 [2024-07-10 14:38:38.723349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.392 [2024-07-10 14:38:38.723643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.392 [2024-07-10 14:38:38.723927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.392 [2024-07-10 14:38:38.723958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.392 [2024-07-10 14:38:38.723979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.392 [2024-07-10 14:38:38.728027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.392 [2024-07-10 14:38:38.737303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.392 [2024-07-10 14:38:38.737781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.392 [2024-07-10 14:38:38.737821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.392 [2024-07-10 14:38:38.737847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.392 [2024-07-10 14:38:38.738146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.392 [2024-07-10 14:38:38.738441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.392 [2024-07-10 14:38:38.738472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.392 [2024-07-10 14:38:38.738493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.392 [2024-07-10 14:38:38.742533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.392 [2024-07-10 14:38:38.751874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.392 [2024-07-10 14:38:38.752375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.392 [2024-07-10 14:38:38.752417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.392 [2024-07-10 14:38:38.752459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.392 [2024-07-10 14:38:38.752743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.392 [2024-07-10 14:38:38.753027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.392 [2024-07-10 14:38:38.753058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.392 [2024-07-10 14:38:38.753079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.392 [2024-07-10 14:38:38.757117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.392 [2024-07-10 14:38:38.766390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.392 [2024-07-10 14:38:38.766902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.392 [2024-07-10 14:38:38.766943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.392 [2024-07-10 14:38:38.766968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.392 [2024-07-10 14:38:38.767250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.392 [2024-07-10 14:38:38.767544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.392 [2024-07-10 14:38:38.767576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.392 [2024-07-10 14:38:38.767598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.392 [2024-07-10 14:38:38.771641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.392 [2024-07-10 14:38:38.780920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.392 [2024-07-10 14:38:38.781437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.392 [2024-07-10 14:38:38.781478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.392 [2024-07-10 14:38:38.781503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.392 [2024-07-10 14:38:38.781785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.392 [2024-07-10 14:38:38.782068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.393 [2024-07-10 14:38:38.782098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.393 [2024-07-10 14:38:38.782120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.393 [2024-07-10 14:38:38.786173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.393 [2024-07-10 14:38:38.795488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.393 [2024-07-10 14:38:38.796004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.393 [2024-07-10 14:38:38.796046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.393 [2024-07-10 14:38:38.796073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.393 [2024-07-10 14:38:38.796354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.393 [2024-07-10 14:38:38.796657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.393 [2024-07-10 14:38:38.796688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.393 [2024-07-10 14:38:38.796710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.393 [2024-07-10 14:38:38.800768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.393 [2024-07-10 14:38:38.809824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.393 [2024-07-10 14:38:38.810363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.393 [2024-07-10 14:38:38.810404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.393 [2024-07-10 14:38:38.810438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.393 [2024-07-10 14:38:38.810722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.393 [2024-07-10 14:38:38.811005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.393 [2024-07-10 14:38:38.811036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.393 [2024-07-10 14:38:38.811057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.393 [2024-07-10 14:38:38.815190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.393 [2024-07-10 14:38:38.824242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.393 [2024-07-10 14:38:38.824762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.393 [2024-07-10 14:38:38.824804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.393 [2024-07-10 14:38:38.824830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.393 [2024-07-10 14:38:38.825111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.393 [2024-07-10 14:38:38.825395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.393 [2024-07-10 14:38:38.825436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.393 [2024-07-10 14:38:38.825461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.393 [2024-07-10 14:38:38.829509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.393 [2024-07-10 14:38:38.838788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.393 [2024-07-10 14:38:38.839294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.393 [2024-07-10 14:38:38.839335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.393 [2024-07-10 14:38:38.839361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.393 [2024-07-10 14:38:38.839653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.393 [2024-07-10 14:38:38.839935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.393 [2024-07-10 14:38:38.839966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.393 [2024-07-10 14:38:38.839988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.393 [2024-07-10 14:38:38.844043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.393 [2024-07-10 14:38:38.853323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.393 [2024-07-10 14:38:38.853870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.393 [2024-07-10 14:38:38.853911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.393 [2024-07-10 14:38:38.853937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.393 [2024-07-10 14:38:38.854218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.393 [2024-07-10 14:38:38.854514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.393 [2024-07-10 14:38:38.854546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.393 [2024-07-10 14:38:38.854567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.393 [2024-07-10 14:38:38.858611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.393 [2024-07-10 14:38:38.867944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.393 [2024-07-10 14:38:38.868421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.393 [2024-07-10 14:38:38.868471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.393 [2024-07-10 14:38:38.868497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.393 [2024-07-10 14:38:38.868798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.393 [2024-07-10 14:38:38.869082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.393 [2024-07-10 14:38:38.869112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.393 [2024-07-10 14:38:38.869134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.651 [2024-07-10 14:38:38.873375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.651 [2024-07-10 14:38:38.882343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.651 [2024-07-10 14:38:38.882878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.651 [2024-07-10 14:38:38.882920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.651 [2024-07-10 14:38:38.882946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.651 [2024-07-10 14:38:38.883226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.651 [2024-07-10 14:38:38.883527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.651 [2024-07-10 14:38:38.883559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.651 [2024-07-10 14:38:38.883580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.651 [2024-07-10 14:38:38.887619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.651 [2024-07-10 14:38:38.896907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.651 [2024-07-10 14:38:38.897410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.651 [2024-07-10 14:38:38.897459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.651 [2024-07-10 14:38:38.897491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.651 [2024-07-10 14:38:38.897774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.651 [2024-07-10 14:38:38.898057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.651 [2024-07-10 14:38:38.898088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.651 [2024-07-10 14:38:38.898110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.651 [2024-07-10 14:38:38.902152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.651 [2024-07-10 14:38:38.911450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.651 [2024-07-10 14:38:38.911955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.651 [2024-07-10 14:38:38.911996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.651 [2024-07-10 14:38:38.912021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.651 [2024-07-10 14:38:38.912302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.651 [2024-07-10 14:38:38.912598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.651 [2024-07-10 14:38:38.912629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.651 [2024-07-10 14:38:38.912650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.651 [2024-07-10 14:38:38.916696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.651 [2024-07-10 14:38:38.925974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.651 [2024-07-10 14:38:38.926469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.651 [2024-07-10 14:38:38.926510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.651 [2024-07-10 14:38:38.926536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.651 [2024-07-10 14:38:38.926816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.651 [2024-07-10 14:38:38.927098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.651 [2024-07-10 14:38:38.927129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.651 [2024-07-10 14:38:38.927150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.651 [2024-07-10 14:38:38.931198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.651 [2024-07-10 14:38:38.940487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.651 [2024-07-10 14:38:38.940983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.651 [2024-07-10 14:38:38.941024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.651 [2024-07-10 14:38:38.941049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.651 [2024-07-10 14:38:38.941330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.651 [2024-07-10 14:38:38.941633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.651 [2024-07-10 14:38:38.941676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.651 [2024-07-10 14:38:38.941697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.651 [2024-07-10 14:38:38.945742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.651 [2024-07-10 14:38:38.955020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.651 [2024-07-10 14:38:38.955546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.651 [2024-07-10 14:38:38.955588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.651 [2024-07-10 14:38:38.955614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.651 [2024-07-10 14:38:38.955896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.651 [2024-07-10 14:38:38.956179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.651 [2024-07-10 14:38:38.956210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.651 [2024-07-10 14:38:38.956231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.651 [2024-07-10 14:38:38.960277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.651 [2024-07-10 14:38:38.969563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.651 [2024-07-10 14:38:38.970043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.651 [2024-07-10 14:38:38.970084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.651 [2024-07-10 14:38:38.970109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.651 [2024-07-10 14:38:38.970391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.651 [2024-07-10 14:38:38.970683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.652 [2024-07-10 14:38:38.970715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.652 [2024-07-10 14:38:38.970737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.652 [2024-07-10 14:38:38.974788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.652 [2024-07-10 14:38:38.984068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.652 [2024-07-10 14:38:38.984564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.652 [2024-07-10 14:38:38.984605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.652 [2024-07-10 14:38:38.984630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.652 [2024-07-10 14:38:38.984911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.652 [2024-07-10 14:38:38.985194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.652 [2024-07-10 14:38:38.985225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.652 [2024-07-10 14:38:38.985245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.652 [2024-07-10 14:38:38.989298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.652 [2024-07-10 14:38:38.998583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.652 [2024-07-10 14:38:38.999086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.652 [2024-07-10 14:38:38.999127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.652 [2024-07-10 14:38:38.999152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.652 [2024-07-10 14:38:38.999446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.652 [2024-07-10 14:38:38.999731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.652 [2024-07-10 14:38:38.999762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.652 [2024-07-10 14:38:38.999783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.652 [2024-07-10 14:38:39.003833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.652 [2024-07-10 14:38:39.013155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.652 [2024-07-10 14:38:39.013628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.652 [2024-07-10 14:38:39.013670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.652 [2024-07-10 14:38:39.013704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.652 [2024-07-10 14:38:39.013988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.652 [2024-07-10 14:38:39.014272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.652 [2024-07-10 14:38:39.014303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.652 [2024-07-10 14:38:39.014325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.652 [2024-07-10 14:38:39.018438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.652 [2024-07-10 14:38:39.027754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.652 [2024-07-10 14:38:39.028324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.652 [2024-07-10 14:38:39.028366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.652 [2024-07-10 14:38:39.028392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.652 [2024-07-10 14:38:39.028707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.652 [2024-07-10 14:38:39.028999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.652 [2024-07-10 14:38:39.029031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.652 [2024-07-10 14:38:39.029053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.652 [2024-07-10 14:38:39.033219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.652 [2024-07-10 14:38:39.042443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.652 [2024-07-10 14:38:39.042927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.652 [2024-07-10 14:38:39.042983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.652 [2024-07-10 14:38:39.043009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.652 [2024-07-10 14:38:39.043299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.652 [2024-07-10 14:38:39.043617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.652 [2024-07-10 14:38:39.043649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.652 [2024-07-10 14:38:39.043671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.652 [2024-07-10 14:38:39.047914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.652 [2024-07-10 14:38:39.057154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.652 [2024-07-10 14:38:39.057663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.652 [2024-07-10 14:38:39.057704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.652 [2024-07-10 14:38:39.057738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.652 [2024-07-10 14:38:39.058042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.652 [2024-07-10 14:38:39.058337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.652 [2024-07-10 14:38:39.058378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.652 [2024-07-10 14:38:39.058400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.652 [2024-07-10 14:38:39.062631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.652 [2024-07-10 14:38:39.071808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.652 [2024-07-10 14:38:39.072335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.652 [2024-07-10 14:38:39.072384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.652 [2024-07-10 14:38:39.072409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.652 [2024-07-10 14:38:39.072714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.652 [2024-07-10 14:38:39.073006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.652 [2024-07-10 14:38:39.073045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.652 [2024-07-10 14:38:39.073067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.652 [2024-07-10 14:38:39.077250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.652 [2024-07-10 14:38:39.086243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.652 [2024-07-10 14:38:39.086759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.652 [2024-07-10 14:38:39.086801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.652 [2024-07-10 14:38:39.086826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.652 [2024-07-10 14:38:39.087108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.652 [2024-07-10 14:38:39.087398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.652 [2024-07-10 14:38:39.087452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.652 [2024-07-10 14:38:39.087475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.653 [2024-07-10 14:38:39.091581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.653 [2024-07-10 14:38:39.100827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.653 [2024-07-10 14:38:39.101442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.653 [2024-07-10 14:38:39.101499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.653 [2024-07-10 14:38:39.101525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.653 [2024-07-10 14:38:39.101822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.653 [2024-07-10 14:38:39.102118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.653 [2024-07-10 14:38:39.102149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.653 [2024-07-10 14:38:39.102170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.653 [2024-07-10 14:38:39.106322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.653 [2024-07-10 14:38:39.115265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.653 [2024-07-10 14:38:39.115800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.653 [2024-07-10 14:38:39.115841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.653 [2024-07-10 14:38:39.115866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.653 [2024-07-10 14:38:39.116149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.653 [2024-07-10 14:38:39.116443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.653 [2024-07-10 14:38:39.116475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.653 [2024-07-10 14:38:39.116497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.653 [2024-07-10 14:38:39.120617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.653 [2024-07-10 14:38:39.129947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.653 [2024-07-10 14:38:39.130465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.653 [2024-07-10 14:38:39.130507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.653 [2024-07-10 14:38:39.130534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.653 [2024-07-10 14:38:39.130826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.653 [2024-07-10 14:38:39.131165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.653 [2024-07-10 14:38:39.131198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.653 [2024-07-10 14:38:39.131220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.135441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.912 [2024-07-10 14:38:39.144393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.144931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.144973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.144999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.145283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.145583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.145616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.145637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.149731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.912 [2024-07-10 14:38:39.158903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.159431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.159473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.159500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.159785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.160070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.160101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.160123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.164226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.912 [2024-07-10 14:38:39.173385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.173902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.173944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.173970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.174254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.174557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.174588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.174610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.178701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.912 [2024-07-10 14:38:39.187808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.188328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.188376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.188403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.188696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.188981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.189012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.189034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.193113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.912 [2024-07-10 14:38:39.202226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.202734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.202775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.202801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.203084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.203369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.203399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.203421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.207506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.912 [2024-07-10 14:38:39.216605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.217084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.217125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.217150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.217444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.217730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.217761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.217783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.221860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.912 [2024-07-10 14:38:39.230979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.231555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.231597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.231622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.231905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.232197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.232229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.232251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.236331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.912 [2024-07-10 14:38:39.245441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.245893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.245935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.245961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.246244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.246544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.246576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.246598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.250666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.912 [2024-07-10 14:38:39.260017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.260517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.260568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.260594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.260878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.261163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.261194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.261216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.265322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.912 [2024-07-10 14:38:39.274414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.274929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.274970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.274995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.275276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.275571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.275602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.275630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.279687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.912 [2024-07-10 14:38:39.288986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.289489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.289529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.289555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.289836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.290120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.290151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.290173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.294240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.912 [2024-07-10 14:38:39.303549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.304061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.304102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.304128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.304408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.304704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.304735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.304756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.308811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.912 [2024-07-10 14:38:39.317893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.318362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.318403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.318439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.318724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.319007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.319038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.319059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.323121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.912 [2024-07-10 14:38:39.332459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.332924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.332970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.332997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.333279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.333578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.333609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.333631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.337684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.912 [2024-07-10 14:38:39.346986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.347491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.347532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.347557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.347839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.348123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.348154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.348177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.352224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.912 [2024-07-10 14:38:39.361516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.361997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.362037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.362062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.362344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.362645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.362677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.362698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.366737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.912 [2024-07-10 14:38:39.376027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.376543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.376585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.376610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.376891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.912 [2024-07-10 14:38:39.377179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.912 [2024-07-10 14:38:39.377210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.912 [2024-07-10 14:38:39.377232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.912 [2024-07-10 14:38:39.381283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:29.912 [2024-07-10 14:38:39.390752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.912 [2024-07-10 14:38:39.391246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.912 [2024-07-10 14:38:39.391302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.912 [2024-07-10 14:38:39.391336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:29.912 [2024-07-10 14:38:39.391649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.171 [2024-07-10 14:38:39.391934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.171 [2024-07-10 14:38:39.391965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.171 [2024-07-10 14:38:39.391987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.171 [2024-07-10 14:38:39.396189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.171 [2024-07-10 14:38:39.405226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.171 [2024-07-10 14:38:39.405768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.171 [2024-07-10 14:38:39.405810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.171 [2024-07-10 14:38:39.405836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.171 [2024-07-10 14:38:39.406120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.171 [2024-07-10 14:38:39.406403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.171 [2024-07-10 14:38:39.406445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.171 [2024-07-10 14:38:39.406469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.171 [2024-07-10 14:38:39.410519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.171 [2024-07-10 14:38:39.419787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.171 [2024-07-10 14:38:39.420302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.171 [2024-07-10 14:38:39.420342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.171 [2024-07-10 14:38:39.420368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.171 [2024-07-10 14:38:39.420660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.171 [2024-07-10 14:38:39.420943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.171 [2024-07-10 14:38:39.420974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.171 [2024-07-10 14:38:39.421002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.171 [2024-07-10 14:38:39.425048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.171 [2024-07-10 14:38:39.434337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.171 [2024-07-10 14:38:39.434856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.171 [2024-07-10 14:38:39.434897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.171 [2024-07-10 14:38:39.434923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.171 [2024-07-10 14:38:39.435202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.171 [2024-07-10 14:38:39.435500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.171 [2024-07-10 14:38:39.435531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.171 [2024-07-10 14:38:39.435553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.171 [2024-07-10 14:38:39.439598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.171 [2024-07-10 14:38:39.448870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.171 [2024-07-10 14:38:39.449362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.171 [2024-07-10 14:38:39.449402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.171 [2024-07-10 14:38:39.449437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.171 [2024-07-10 14:38:39.449720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.171 [2024-07-10 14:38:39.450003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.171 [2024-07-10 14:38:39.450033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.171 [2024-07-10 14:38:39.450055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.171 [2024-07-10 14:38:39.454136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.171 [2024-07-10 14:38:39.463414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.171 [2024-07-10 14:38:39.463930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.171 [2024-07-10 14:38:39.463971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.171 [2024-07-10 14:38:39.463997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.172 [2024-07-10 14:38:39.464277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.172 [2024-07-10 14:38:39.464574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.172 [2024-07-10 14:38:39.464605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.172 [2024-07-10 14:38:39.464627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.172 [2024-07-10 14:38:39.468667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.172 [2024-07-10 14:38:39.477947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.172 [2024-07-10 14:38:39.478446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.172 [2024-07-10 14:38:39.478496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.172 [2024-07-10 14:38:39.478521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.172 [2024-07-10 14:38:39.478803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.172 [2024-07-10 14:38:39.479086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.172 [2024-07-10 14:38:39.479117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.172 [2024-07-10 14:38:39.479139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.172 [2024-07-10 14:38:39.483178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.172 [2024-07-10 14:38:39.492451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.172 [2024-07-10 14:38:39.492972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.172 [2024-07-10 14:38:39.493013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.172 [2024-07-10 14:38:39.493039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.172 [2024-07-10 14:38:39.493319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.172 [2024-07-10 14:38:39.493615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.172 [2024-07-10 14:38:39.493646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.172 [2024-07-10 14:38:39.493668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.172 [2024-07-10 14:38:39.497708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.172 [2024-07-10 14:38:39.506991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.172 [2024-07-10 14:38:39.507482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.172 [2024-07-10 14:38:39.507523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.172 [2024-07-10 14:38:39.507549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.172 [2024-07-10 14:38:39.507831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.172 [2024-07-10 14:38:39.508114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.172 [2024-07-10 14:38:39.508145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.172 [2024-07-10 14:38:39.508167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.172 [2024-07-10 14:38:39.512420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.172 [2024-07-10 14:38:39.521480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.172 [2024-07-10 14:38:39.521985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.172 [2024-07-10 14:38:39.522032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.172 [2024-07-10 14:38:39.522059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.172 [2024-07-10 14:38:39.522347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.172 [2024-07-10 14:38:39.522642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.172 [2024-07-10 14:38:39.522673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.172 [2024-07-10 14:38:39.522694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.172 [2024-07-10 14:38:39.526733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.172 [2024-07-10 14:38:39.536008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.172 [2024-07-10 14:38:39.536485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.172 [2024-07-10 14:38:39.536526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.172 [2024-07-10 14:38:39.536552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.172 [2024-07-10 14:38:39.536834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.172 [2024-07-10 14:38:39.537118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.172 [2024-07-10 14:38:39.537150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.172 [2024-07-10 14:38:39.537172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.172 [2024-07-10 14:38:39.541230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.172 [2024-07-10 14:38:39.550528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.172 [2024-07-10 14:38:39.551043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.172 [2024-07-10 14:38:39.551083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.172 [2024-07-10 14:38:39.551109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.172 [2024-07-10 14:38:39.551390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.172 [2024-07-10 14:38:39.551682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.172 [2024-07-10 14:38:39.551721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.172 [2024-07-10 14:38:39.551743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.172 [2024-07-10 14:38:39.555783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.172 [2024-07-10 14:38:39.565069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.172 [2024-07-10 14:38:39.565577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.172 [2024-07-10 14:38:39.565618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.172 [2024-07-10 14:38:39.565658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.172 [2024-07-10 14:38:39.565940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.172 [2024-07-10 14:38:39.566221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.172 [2024-07-10 14:38:39.566252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.172 [2024-07-10 14:38:39.566282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.172 [2024-07-10 14:38:39.570328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.172 [2024-07-10 14:38:39.579610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.172 [2024-07-10 14:38:39.580284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.172 [2024-07-10 14:38:39.580324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.172 [2024-07-10 14:38:39.580350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.172 [2024-07-10 14:38:39.580641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.172 [2024-07-10 14:38:39.580924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.172 [2024-07-10 14:38:39.580955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.172 [2024-07-10 14:38:39.580977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.172 [2024-07-10 14:38:39.585021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.172 [2024-07-10 14:38:39.594076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.172 [2024-07-10 14:38:39.594559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.173 [2024-07-10 14:38:39.594601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.173 [2024-07-10 14:38:39.594627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.173 [2024-07-10 14:38:39.594909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.173 [2024-07-10 14:38:39.595192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.173 [2024-07-10 14:38:39.595223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.173 [2024-07-10 14:38:39.595245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.173 [2024-07-10 14:38:39.599290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.173 [2024-07-10 14:38:39.608582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.173 [2024-07-10 14:38:39.609093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.173 [2024-07-10 14:38:39.609133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.173 [2024-07-10 14:38:39.609159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.173 [2024-07-10 14:38:39.609450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.173 [2024-07-10 14:38:39.609734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.173 [2024-07-10 14:38:39.609765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.173 [2024-07-10 14:38:39.609787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.173 [2024-07-10 14:38:39.613830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.173 [2024-07-10 14:38:39.623094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.173 [2024-07-10 14:38:39.623610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.173 [2024-07-10 14:38:39.623651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.173 [2024-07-10 14:38:39.623676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.173 [2024-07-10 14:38:39.623957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.173 [2024-07-10 14:38:39.624239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.173 [2024-07-10 14:38:39.624270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.173 [2024-07-10 14:38:39.624292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.173 [2024-07-10 14:38:39.628337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.173 [2024-07-10 14:38:39.637615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.173 [2024-07-10 14:38:39.638160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.173 [2024-07-10 14:38:39.638201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.173 [2024-07-10 14:38:39.638226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.173 [2024-07-10 14:38:39.638527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.173 [2024-07-10 14:38:39.638810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.173 [2024-07-10 14:38:39.638841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.173 [2024-07-10 14:38:39.638863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.173 [2024-07-10 14:38:39.642908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.433 [2024-07-10 14:38:39.652250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.433 [2024-07-10 14:38:39.652955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.433 [2024-07-10 14:38:39.653024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.433 [2024-07-10 14:38:39.653051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.433 [2024-07-10 14:38:39.653332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.433 [2024-07-10 14:38:39.653626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.433 [2024-07-10 14:38:39.653658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.433 [2024-07-10 14:38:39.653680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.433 [2024-07-10 14:38:39.657850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.433 [2024-07-10 14:38:39.666649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.433 [2024-07-10 14:38:39.667252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.433 [2024-07-10 14:38:39.667320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.433 [2024-07-10 14:38:39.667346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.433 [2024-07-10 14:38:39.667644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.433 [2024-07-10 14:38:39.667927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.433 [2024-07-10 14:38:39.667959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.433 [2024-07-10 14:38:39.667981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.433 [2024-07-10 14:38:39.672026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.433 [2024-07-10 14:38:39.681077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.433 [2024-07-10 14:38:39.681564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.433 [2024-07-10 14:38:39.681605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.433 [2024-07-10 14:38:39.681631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.433 [2024-07-10 14:38:39.681913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.433 [2024-07-10 14:38:39.682195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.433 [2024-07-10 14:38:39.682226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.433 [2024-07-10 14:38:39.682248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.433 [2024-07-10 14:38:39.686296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.433 [2024-07-10 14:38:39.695583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.433 [2024-07-10 14:38:39.696147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.433 [2024-07-10 14:38:39.696189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.433 [2024-07-10 14:38:39.696214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.433 [2024-07-10 14:38:39.696511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.433 [2024-07-10 14:38:39.696794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.433 [2024-07-10 14:38:39.696825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.433 [2024-07-10 14:38:39.696847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.433 [2024-07-10 14:38:39.700886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.433 [2024-07-10 14:38:39.709986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.433 [2024-07-10 14:38:39.710479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.433 [2024-07-10 14:38:39.710520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.433 [2024-07-10 14:38:39.710546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.433 [2024-07-10 14:38:39.710827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.433 [2024-07-10 14:38:39.711110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.433 [2024-07-10 14:38:39.711141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.433 [2024-07-10 14:38:39.711169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.433 [2024-07-10 14:38:39.715212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.434 [2024-07-10 14:38:39.724501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.434 [2024-07-10 14:38:39.725014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-07-10 14:38:39.725054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.434 [2024-07-10 14:38:39.725080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.434 [2024-07-10 14:38:39.725360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.434 [2024-07-10 14:38:39.725654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.434 [2024-07-10 14:38:39.725687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.434 [2024-07-10 14:38:39.725709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.434 [2024-07-10 14:38:39.729750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.434 [2024-07-10 14:38:39.739032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.434 [2024-07-10 14:38:39.739502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-07-10 14:38:39.739543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.434 [2024-07-10 14:38:39.739568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.434 [2024-07-10 14:38:39.739848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.434 [2024-07-10 14:38:39.740130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.434 [2024-07-10 14:38:39.740161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.434 [2024-07-10 14:38:39.740183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.434 [2024-07-10 14:38:39.744228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.434 [2024-07-10 14:38:39.753507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.434 [2024-07-10 14:38:39.753953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-07-10 14:38:39.753994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.434 [2024-07-10 14:38:39.754019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.434 [2024-07-10 14:38:39.754300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.434 [2024-07-10 14:38:39.754597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.434 [2024-07-10 14:38:39.754628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.434 [2024-07-10 14:38:39.754649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.434 [2024-07-10 14:38:39.758692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.434 [2024-07-10 14:38:39.767974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.434 [2024-07-10 14:38:39.768493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-07-10 14:38:39.768534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.434 [2024-07-10 14:38:39.768559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.434 [2024-07-10 14:38:39.768840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.434 [2024-07-10 14:38:39.769135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.434 [2024-07-10 14:38:39.769166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.434 [2024-07-10 14:38:39.769187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.434 [2024-07-10 14:38:39.773262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.434 [2024-07-10 14:38:39.782543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.434 [2024-07-10 14:38:39.783052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-07-10 14:38:39.783092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.434 [2024-07-10 14:38:39.783117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.434 [2024-07-10 14:38:39.783398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.434 [2024-07-10 14:38:39.783690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.434 [2024-07-10 14:38:39.783721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.434 [2024-07-10 14:38:39.783743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.434 [2024-07-10 14:38:39.787790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.434 [2024-07-10 14:38:39.797071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.434 [2024-07-10 14:38:39.797543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-07-10 14:38:39.797586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.434 [2024-07-10 14:38:39.797611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.434 [2024-07-10 14:38:39.797892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.434 [2024-07-10 14:38:39.798174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.434 [2024-07-10 14:38:39.798205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.434 [2024-07-10 14:38:39.798227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.434 [2024-07-10 14:38:39.802284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.434 [2024-07-10 14:38:39.811582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.434 [2024-07-10 14:38:39.812100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-07-10 14:38:39.812141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.434 [2024-07-10 14:38:39.812167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.434 [2024-07-10 14:38:39.812465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.434 [2024-07-10 14:38:39.812749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.434 [2024-07-10 14:38:39.812789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.434 [2024-07-10 14:38:39.812810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.434 [2024-07-10 14:38:39.816864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.434 [2024-07-10 14:38:39.826162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.434 [2024-07-10 14:38:39.826648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-07-10 14:38:39.826690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.434 [2024-07-10 14:38:39.826716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.434 [2024-07-10 14:38:39.826996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.434 [2024-07-10 14:38:39.827279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.434 [2024-07-10 14:38:39.827310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.434 [2024-07-10 14:38:39.827331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.434 [2024-07-10 14:38:39.831377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.434 [2024-07-10 14:38:39.840673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.434 [2024-07-10 14:38:39.841286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-07-10 14:38:39.841347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.434 [2024-07-10 14:38:39.841372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.434 [2024-07-10 14:38:39.841663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.434 [2024-07-10 14:38:39.841944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.434 [2024-07-10 14:38:39.841975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.434 [2024-07-10 14:38:39.841996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.434 [2024-07-10 14:38:39.846040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.434 [2024-07-10 14:38:39.855094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.435 [2024-07-10 14:38:39.855594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-07-10 14:38:39.855635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.435 [2024-07-10 14:38:39.855661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.435 [2024-07-10 14:38:39.855942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.435 [2024-07-10 14:38:39.856225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.435 [2024-07-10 14:38:39.856256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.435 [2024-07-10 14:38:39.856283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.435 [2024-07-10 14:38:39.860332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.435 [2024-07-10 14:38:39.869618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.435 [2024-07-10 14:38:39.870109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-07-10 14:38:39.870149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.435 [2024-07-10 14:38:39.870175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.435 [2024-07-10 14:38:39.870469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.435 [2024-07-10 14:38:39.870751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.435 [2024-07-10 14:38:39.870782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.435 [2024-07-10 14:38:39.870804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.435 [2024-07-10 14:38:39.874840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.435 [2024-07-10 14:38:39.884113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.435 [2024-07-10 14:38:39.884641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-07-10 14:38:39.884682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.435 [2024-07-10 14:38:39.884708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.435 [2024-07-10 14:38:39.884988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.435 [2024-07-10 14:38:39.885270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.435 [2024-07-10 14:38:39.885301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.435 [2024-07-10 14:38:39.885322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.435 [2024-07-10 14:38:39.889374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.435 [2024-07-10 14:38:39.898659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.435 [2024-07-10 14:38:39.899125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-07-10 14:38:39.899166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.435 [2024-07-10 14:38:39.899191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.435 [2024-07-10 14:38:39.899484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.435 [2024-07-10 14:38:39.899766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.435 [2024-07-10 14:38:39.899797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.435 [2024-07-10 14:38:39.899818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.435 [2024-07-10 14:38:39.903856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.695 [2024-07-10 14:38:39.913355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.695 [2024-07-10 14:38:39.913889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.695 [2024-07-10 14:38:39.913932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.695 [2024-07-10 14:38:39.913958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.695 [2024-07-10 14:38:39.914255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.695 [2024-07-10 14:38:39.914552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.695 [2024-07-10 14:38:39.914584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.695 [2024-07-10 14:38:39.914605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.695 [2024-07-10 14:38:39.918767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.695 [2024-07-10 14:38:39.927840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.695 [2024-07-10 14:38:39.928357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.695 [2024-07-10 14:38:39.928398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.695 [2024-07-10 14:38:39.928423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.695 [2024-07-10 14:38:39.928717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.695 [2024-07-10 14:38:39.929001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.695 [2024-07-10 14:38:39.929032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.695 [2024-07-10 14:38:39.929054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.695 [2024-07-10 14:38:39.933097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.695 [2024-07-10 14:38:39.942367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.695 [2024-07-10 14:38:39.943016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.695 [2024-07-10 14:38:39.943083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.695 [2024-07-10 14:38:39.943108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.695 [2024-07-10 14:38:39.943390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.695 [2024-07-10 14:38:39.943682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.695 [2024-07-10 14:38:39.943713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.695 [2024-07-10 14:38:39.943735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.695 [2024-07-10 14:38:39.947779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.695 [2024-07-10 14:38:39.956816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.695 [2024-07-10 14:38:39.957482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.695 [2024-07-10 14:38:39.957524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.695 [2024-07-10 14:38:39.957549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.695 [2024-07-10 14:38:39.957835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.695 [2024-07-10 14:38:39.958118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.695 [2024-07-10 14:38:39.958149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.695 [2024-07-10 14:38:39.958171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.695 [2024-07-10 14:38:39.962206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.695 [2024-07-10 14:38:39.971257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.695 [2024-07-10 14:38:39.971763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.695 [2024-07-10 14:38:39.971804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.695 [2024-07-10 14:38:39.971829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.695 [2024-07-10 14:38:39.972109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.695 [2024-07-10 14:38:39.972394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.695 [2024-07-10 14:38:39.972435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.695 [2024-07-10 14:38:39.972473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.695 [2024-07-10 14:38:39.976520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.695 [2024-07-10 14:38:39.985799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.695 [2024-07-10 14:38:39.986249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.695 [2024-07-10 14:38:39.986296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.695 [2024-07-10 14:38:39.986323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.695 [2024-07-10 14:38:39.986619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.695 [2024-07-10 14:38:39.986903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.695 [2024-07-10 14:38:39.986933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.695 [2024-07-10 14:38:39.986955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.695 [2024-07-10 14:38:39.990997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.695 [2024-07-10 14:38:40.000293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.695 [2024-07-10 14:38:40.000920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.695 [2024-07-10 14:38:40.000975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.695 [2024-07-10 14:38:40.001013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.695 [2024-07-10 14:38:40.001393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.695 [2024-07-10 14:38:40.001790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.695 [2024-07-10 14:38:40.001838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.695 [2024-07-10 14:38:40.001870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.695 [2024-07-10 14:38:40.007019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.695 [2024-07-10 14:38:40.014998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.695 [2024-07-10 14:38:40.015541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.695 [2024-07-10 14:38:40.015586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.695 [2024-07-10 14:38:40.015614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.695 [2024-07-10 14:38:40.015907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.695 [2024-07-10 14:38:40.016197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.696 [2024-07-10 14:38:40.016229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.696 [2024-07-10 14:38:40.016252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.696 [2024-07-10 14:38:40.020396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.696 [2024-07-10 14:38:40.029723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.696 [2024-07-10 14:38:40.030250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.696 [2024-07-10 14:38:40.030292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.696 [2024-07-10 14:38:40.030318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.696 [2024-07-10 14:38:40.030618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.696 [2024-07-10 14:38:40.030907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.696 [2024-07-10 14:38:40.030939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.696 [2024-07-10 14:38:40.030961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.696 [2024-07-10 14:38:40.035114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.696 [2024-07-10 14:38:40.044191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.696 [2024-07-10 14:38:40.044723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.696 [2024-07-10 14:38:40.044765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.696 [2024-07-10 14:38:40.044791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.696 [2024-07-10 14:38:40.045079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.696 [2024-07-10 14:38:40.045370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.696 [2024-07-10 14:38:40.045401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.696 [2024-07-10 14:38:40.045433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.696 [2024-07-10 14:38:40.049611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.696 [2024-07-10 14:38:40.058715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.696 [2024-07-10 14:38:40.059227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.696 [2024-07-10 14:38:40.059269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.696 [2024-07-10 14:38:40.059295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.696 [2024-07-10 14:38:40.059597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.696 [2024-07-10 14:38:40.059889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.696 [2024-07-10 14:38:40.059920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.696 [2024-07-10 14:38:40.059943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.696 [2024-07-10 14:38:40.064110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.696 [2024-07-10 14:38:40.073183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.696 [2024-07-10 14:38:40.073710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.696 [2024-07-10 14:38:40.073752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.696 [2024-07-10 14:38:40.073778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.696 [2024-07-10 14:38:40.074064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.696 [2024-07-10 14:38:40.074352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.696 [2024-07-10 14:38:40.074383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.696 [2024-07-10 14:38:40.074405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.696 [2024-07-10 14:38:40.078548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.696 [2024-07-10 14:38:40.087756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.696 [2024-07-10 14:38:40.088262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.696 [2024-07-10 14:38:40.088303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.696 [2024-07-10 14:38:40.088329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.696 [2024-07-10 14:38:40.088625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.696 [2024-07-10 14:38:40.088912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.696 [2024-07-10 14:38:40.088943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.696 [2024-07-10 14:38:40.088965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.696 [2024-07-10 14:38:40.093064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.696 [2024-07-10 14:38:40.102246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.696 [2024-07-10 14:38:40.102757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.696 [2024-07-10 14:38:40.102798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.696 [2024-07-10 14:38:40.102823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.696 [2024-07-10 14:38:40.103114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.696 [2024-07-10 14:38:40.103400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.696 [2024-07-10 14:38:40.103441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.696 [2024-07-10 14:38:40.103465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.696 [2024-07-10 14:38:40.107542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.696 [2024-07-10 14:38:40.116671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.696 [2024-07-10 14:38:40.117169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.696 [2024-07-10 14:38:40.117210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.696 [2024-07-10 14:38:40.117235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.696 [2024-07-10 14:38:40.117534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.696 [2024-07-10 14:38:40.117821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.696 [2024-07-10 14:38:40.117852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.696 [2024-07-10 14:38:40.117874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.696 [2024-07-10 14:38:40.121957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.696 [2024-07-10 14:38:40.131123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.696 [2024-07-10 14:38:40.131652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.696 [2024-07-10 14:38:40.131693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.696 [2024-07-10 14:38:40.131729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.696 [2024-07-10 14:38:40.132011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.696 [2024-07-10 14:38:40.132296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.696 [2024-07-10 14:38:40.132327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.696 [2024-07-10 14:38:40.132348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.696 [2024-07-10 14:38:40.136463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.696 [2024-07-10 14:38:40.145664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.696 [2024-07-10 14:38:40.146242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.696 [2024-07-10 14:38:40.146282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.696 [2024-07-10 14:38:40.146308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.696 [2024-07-10 14:38:40.146617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.696 [2024-07-10 14:38:40.146909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.696 [2024-07-10 14:38:40.146946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.696 [2024-07-10 14:38:40.146969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.696 [2024-07-10 14:38:40.151065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.696 [2024-07-10 14:38:40.160082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.696 [2024-07-10 14:38:40.160591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.696 [2024-07-10 14:38:40.160633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.696 [2024-07-10 14:38:40.160659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.696 [2024-07-10 14:38:40.160942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.696 [2024-07-10 14:38:40.161226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.696 [2024-07-10 14:38:40.161257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.696 [2024-07-10 14:38:40.161279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.696 [2024-07-10 14:38:40.165389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.956 [2024-07-10 14:38:40.174840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.956 [2024-07-10 14:38:40.175392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.956 [2024-07-10 14:38:40.175454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.956 [2024-07-10 14:38:40.175491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.956 [2024-07-10 14:38:40.175775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.956 [2024-07-10 14:38:40.176062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.956 [2024-07-10 14:38:40.176093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.956 [2024-07-10 14:38:40.176115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.956 [2024-07-10 14:38:40.180378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.956 [2024-07-10 14:38:40.189370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.956 [2024-07-10 14:38:40.189946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.956 [2024-07-10 14:38:40.189988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.956 [2024-07-10 14:38:40.190013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.956 [2024-07-10 14:38:40.190298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.956 [2024-07-10 14:38:40.190605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.956 [2024-07-10 14:38:40.190638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.956 [2024-07-10 14:38:40.190660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.956 [2024-07-10 14:38:40.194774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.956 [2024-07-10 14:38:40.203763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.956 [2024-07-10 14:38:40.204329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.956 [2024-07-10 14:38:40.204371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.956 [2024-07-10 14:38:40.204396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.956 [2024-07-10 14:38:40.204691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.956 [2024-07-10 14:38:40.204978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.956 [2024-07-10 14:38:40.205009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.956 [2024-07-10 14:38:40.205031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.956 [2024-07-10 14:38:40.209134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.956 [2024-07-10 14:38:40.218339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.956 [2024-07-10 14:38:40.218813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.956 [2024-07-10 14:38:40.218855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.956 [2024-07-10 14:38:40.218880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.956 [2024-07-10 14:38:40.219164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.956 [2024-07-10 14:38:40.219466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.956 [2024-07-10 14:38:40.219497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.956 [2024-07-10 14:38:40.219519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.956 [2024-07-10 14:38:40.223627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.956 [2024-07-10 14:38:40.232834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.956 [2024-07-10 14:38:40.233347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.956 [2024-07-10 14:38:40.233388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.956 [2024-07-10 14:38:40.233414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.956 [2024-07-10 14:38:40.233709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.956 [2024-07-10 14:38:40.233994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.956 [2024-07-10 14:38:40.234026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.956 [2024-07-10 14:38:40.234049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.956 [2024-07-10 14:38:40.238197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.956 [2024-07-10 14:38:40.247436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.957 [2024-07-10 14:38:40.247922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.957 [2024-07-10 14:38:40.247963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.957 [2024-07-10 14:38:40.247998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.957 [2024-07-10 14:38:40.248287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.957 [2024-07-10 14:38:40.248586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.957 [2024-07-10 14:38:40.248618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.957 [2024-07-10 14:38:40.248641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.957 [2024-07-10 14:38:40.252779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.957 [2024-07-10 14:38:40.261982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.957 [2024-07-10 14:38:40.262498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.957 [2024-07-10 14:38:40.262540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.957 [2024-07-10 14:38:40.262566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.957 [2024-07-10 14:38:40.262858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.957 [2024-07-10 14:38:40.263157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.957 [2024-07-10 14:38:40.263187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.957 [2024-07-10 14:38:40.263209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.957 [2024-07-10 14:38:40.267328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.957 [2024-07-10 14:38:40.276505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.957 [2024-07-10 14:38:40.277132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.957 [2024-07-10 14:38:40.277173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.957 [2024-07-10 14:38:40.277199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.957 [2024-07-10 14:38:40.277499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.957 [2024-07-10 14:38:40.277784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.957 [2024-07-10 14:38:40.277815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.957 [2024-07-10 14:38:40.277838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.957 [2024-07-10 14:38:40.281934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.957 [2024-07-10 14:38:40.291040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.957 [2024-07-10 14:38:40.291542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.957 [2024-07-10 14:38:40.291584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.957 [2024-07-10 14:38:40.291609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.957 [2024-07-10 14:38:40.291892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.957 [2024-07-10 14:38:40.292176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.957 [2024-07-10 14:38:40.292212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.957 [2024-07-10 14:38:40.292235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.957 [2024-07-10 14:38:40.296320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.957 [2024-07-10 14:38:40.305413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.957 [2024-07-10 14:38:40.305963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.957 [2024-07-10 14:38:40.306004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.957 [2024-07-10 14:38:40.306030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.957 [2024-07-10 14:38:40.306312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.957 [2024-07-10 14:38:40.306607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.957 [2024-07-10 14:38:40.306639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.957 [2024-07-10 14:38:40.306661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.957 [2024-07-10 14:38:40.310720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.957 [2024-07-10 14:38:40.319804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.957 [2024-07-10 14:38:40.320298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.957 [2024-07-10 14:38:40.320338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.957 [2024-07-10 14:38:40.320363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.957 [2024-07-10 14:38:40.320658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.957 [2024-07-10 14:38:40.320944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.957 [2024-07-10 14:38:40.320975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.957 [2024-07-10 14:38:40.320996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.957 [2024-07-10 14:38:40.325054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.957 [2024-07-10 14:38:40.334367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.957 [2024-07-10 14:38:40.334887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.957 [2024-07-10 14:38:40.334929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.957 [2024-07-10 14:38:40.334954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.957 [2024-07-10 14:38:40.335237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.957 [2024-07-10 14:38:40.335538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.957 [2024-07-10 14:38:40.335570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.957 [2024-07-10 14:38:40.335592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.957 [2024-07-10 14:38:40.339649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.957 [2024-07-10 14:38:40.348732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.957 [2024-07-10 14:38:40.349251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.957 [2024-07-10 14:38:40.349293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.957 [2024-07-10 14:38:40.349318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.957 [2024-07-10 14:38:40.349613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.957 [2024-07-10 14:38:40.349899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.957 [2024-07-10 14:38:40.349930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.957 [2024-07-10 14:38:40.349952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.957 [2024-07-10 14:38:40.354009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.957 [2024-07-10 14:38:40.363301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.957 [2024-07-10 14:38:40.363790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.957 [2024-07-10 14:38:40.363832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.957 [2024-07-10 14:38:40.363858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.957 [2024-07-10 14:38:40.364139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.957 [2024-07-10 14:38:40.364422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.957 [2024-07-10 14:38:40.364465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.957 [2024-07-10 14:38:40.364487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.957 [2024-07-10 14:38:40.368547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.957 [2024-07-10 14:38:40.377835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.957 [2024-07-10 14:38:40.378330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.957 [2024-07-10 14:38:40.378370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.957 [2024-07-10 14:38:40.378395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.957 [2024-07-10 14:38:40.378688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.957 [2024-07-10 14:38:40.378970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.957 [2024-07-10 14:38:40.379001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.957 [2024-07-10 14:38:40.379023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.957 [2024-07-10 14:38:40.383067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.957 [2024-07-10 14:38:40.392343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.957 [2024-07-10 14:38:40.392859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.957 [2024-07-10 14:38:40.392911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.957 [2024-07-10 14:38:40.392943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.957 [2024-07-10 14:38:40.393227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.958 [2024-07-10 14:38:40.393524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.958 [2024-07-10 14:38:40.393556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.958 [2024-07-10 14:38:40.393578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.958 [2024-07-10 14:38:40.397626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:30.958 [2024-07-10 14:38:40.406903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.958 [2024-07-10 14:38:40.407456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.958 [2024-07-10 14:38:40.407499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.958 [2024-07-10 14:38:40.407524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.958 [2024-07-10 14:38:40.407806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.958 [2024-07-10 14:38:40.408088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.958 [2024-07-10 14:38:40.408118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.958 [2024-07-10 14:38:40.408141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.958 [2024-07-10 14:38:40.412185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:30.958 [2024-07-10 14:38:40.421473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.958 [2024-07-10 14:38:40.421986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.958 [2024-07-10 14:38:40.422028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:30.958 [2024-07-10 14:38:40.422054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:30.958 [2024-07-10 14:38:40.422336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:30.958 [2024-07-10 14:38:40.422633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.958 [2024-07-10 14:38:40.422665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.958 [2024-07-10 14:38:40.422687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.958 [2024-07-10 14:38:40.426725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.218 [2024-07-10 14:38:40.436001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.218 [2024-07-10 14:38:40.436523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.218 [2024-07-10 14:38:40.436564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.218 [2024-07-10 14:38:40.436589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.218 [2024-07-10 14:38:40.436872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.218 [2024-07-10 14:38:40.437156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.218 [2024-07-10 14:38:40.437192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.218 [2024-07-10 14:38:40.437215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.218 [2024-07-10 14:38:40.441370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.218 [2024-07-10 14:38:40.450442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.218 [2024-07-10 14:38:40.450918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.218 [2024-07-10 14:38:40.450959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.218 [2024-07-10 14:38:40.450985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.218 [2024-07-10 14:38:40.451265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.218 [2024-07-10 14:38:40.451563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.218 [2024-07-10 14:38:40.451596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.218 [2024-07-10 14:38:40.451618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1550106 Killed "${NVMF_APP[@]}" "$@" 00:36:31.218 14:38:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:31.218 [2024-07-10 14:38:40.455660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
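The long run of "connect() failed, errno = 111" entries above is ECONNREFUSED: bdevperf keeps retrying its connection to 10.0.0.2:4420 while the NVMe-oF target it had been talking to has just been killed (the Killed "${NVMF_APP[@]}" message on this line), so nothing is listening on that port until tgt_init brings a new target up. As a purely illustrative, stand-alone check (not part of bdevperf.sh), the same condition can be seen with a plain TCP probe against the address and port taken from the log, assuming an nc that supports -z (e.g. the OpenBSD variant):

# Probe the NVMe-oF TCP listener bdevperf is trying to reach.
# Address/port copied from the log above; adjust for your setup.
ADDR=10.0.0.2
PORT=4420
if nc -z -w 2 "$ADDR" "$PORT"; then
    echo "listener present on $ADDR:$PORT"
else
    # nc exits non-zero when the connection is refused, which matches the
    # errno = 111 (ECONNREFUSED) reported by posix_sock_create above.
    echo "no listener on $ADDR:$PORT (connection refused or unreachable)"
fi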
00:36:31.218 14:38:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:31.218 14:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:31.218 14:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:31.218 14:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:31.218 14:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1551317 00:36:31.218 14:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:31.218 14:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1551317 00:36:31.218 14:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1551317 ']' 00:36:31.218 14:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.218 14:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:31.218 14:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:31.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:31.218 14:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:31.218 14:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:31.218 [2024-07-10 14:38:40.464958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.218 [2024-07-10 14:38:40.465476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.218 [2024-07-10 14:38:40.465518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.218 [2024-07-10 14:38:40.465543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.218 [2024-07-10 14:38:40.465825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.218 [2024-07-10 14:38:40.466108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.218 [2024-07-10 14:38:40.466145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.218 [2024-07-10 14:38:40.466168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.218 [2024-07-10 14:38:40.470211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
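Here tgt_init relaunches the target: nvmf_tgt is started inside the cvl_0_0_ns_spdk network namespace with core mask 0xE, and waitforlisten then blocks until the new process answers on the RPC socket /var/tmp/spdk.sock. A simplified stand-alone sketch of that sequence, reusing the paths and flags shown in the log but only checking that the socket file appears (the real waitforlisten issues an RPC instead):

# Relaunch the target in the background, mirroring the command from the log.
sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xE &
tgt_pid=$!

# Wait up to ~10 s for the UNIX-domain RPC socket to show up.
rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    [ -S "$rpc_sock" ] && { echo "nvmf_tgt (pid $tgt_pid) is up on $rpc_sock"; break; }
    sleep 0.1
done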
00:36:31.218 [2024-07-10 14:38:40.479509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.218 [2024-07-10 14:38:40.480000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.218 [2024-07-10 14:38:40.480040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.218 [2024-07-10 14:38:40.480066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.218 [2024-07-10 14:38:40.480348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.218 [2024-07-10 14:38:40.480642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.218 [2024-07-10 14:38:40.480673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.218 [2024-07-10 14:38:40.480695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.218 [2024-07-10 14:38:40.484755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.218 [2024-07-10 14:38:40.493947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.218 [2024-07-10 14:38:40.494486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.218 [2024-07-10 14:38:40.494531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.218 [2024-07-10 14:38:40.494557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.218 [2024-07-10 14:38:40.494846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.218 [2024-07-10 14:38:40.495136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.218 [2024-07-10 14:38:40.495168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.218 [2024-07-10 14:38:40.495192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.218 [2024-07-10 14:38:40.499384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.218 [2024-07-10 14:38:40.508533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.218 [2024-07-10 14:38:40.509162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.218 [2024-07-10 14:38:40.509209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.218 [2024-07-10 14:38:40.509237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.218 [2024-07-10 14:38:40.509544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.218 [2024-07-10 14:38:40.509839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.218 [2024-07-10 14:38:40.509871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.218 [2024-07-10 14:38:40.509897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.218 [2024-07-10 14:38:40.514084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.218 [2024-07-10 14:38:40.523208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.218 [2024-07-10 14:38:40.523726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.218 [2024-07-10 14:38:40.523767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.218 [2024-07-10 14:38:40.523793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.218 [2024-07-10 14:38:40.524081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.218 [2024-07-10 14:38:40.524371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.218 [2024-07-10 14:38:40.524402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.218 [2024-07-10 14:38:40.524436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.218 [2024-07-10 14:38:40.528807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.218 [2024-07-10 14:38:40.537902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.218 [2024-07-10 14:38:40.538437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.218 [2024-07-10 14:38:40.538479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.218 [2024-07-10 14:38:40.538506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.218 [2024-07-10 14:38:40.538794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.218 [2024-07-10 14:38:40.539086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.218 [2024-07-10 14:38:40.539116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.218 [2024-07-10 14:38:40.539139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.218 [2024-07-10 14:38:40.543324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.218 [2024-07-10 14:38:40.552415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.218 [2024-07-10 14:38:40.552942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.218 [2024-07-10 14:38:40.552986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.218 [2024-07-10 14:38:40.553012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.218 [2024-07-10 14:38:40.553300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.219 [2024-07-10 14:38:40.553604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.219 [2024-07-10 14:38:40.553636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.219 [2024-07-10 14:38:40.553658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.219 [2024-07-10 14:38:40.555462] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:36:31.219 [2024-07-10 14:38:40.555606] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:31.219 [2024-07-10 14:38:40.557844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
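The new target's DPDK EAL line shows how this instance is isolated: --file-prefix=spdk0 gives it its own shared-memory/runtime namespace so it cannot collide with other DPDK processes on the node, --proc-type=auto lets EAL pick primary/secondary mode, and --base-virtaddr pins where hugepage memory is mapped. As a rough way to see that isolation on a live system (assuming DPDK's usual runtime-directory layout of /var/run/dpdk/<file-prefix> when running as root; that path is an assumption, not something printed in this log):

# Look for the runtime state belonging to the 'spdk0' file prefix.
ls -l /var/run/dpdk/spdk0/ 2>/dev/null \
    || echo "no DPDK runtime directory found for prefix spdk0"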
00:36:31.219 [2024-07-10 14:38:40.567024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.219 [2024-07-10 14:38:40.567534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.219 [2024-07-10 14:38:40.567576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.219 [2024-07-10 14:38:40.567602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.219 [2024-07-10 14:38:40.567895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.219 [2024-07-10 14:38:40.568188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.219 [2024-07-10 14:38:40.568220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.219 [2024-07-10 14:38:40.568242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.219 [2024-07-10 14:38:40.572450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.219 [2024-07-10 14:38:40.581566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.219 [2024-07-10 14:38:40.582062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.219 [2024-07-10 14:38:40.582103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.219 [2024-07-10 14:38:40.582128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.219 [2024-07-10 14:38:40.582417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.219 [2024-07-10 14:38:40.582719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.219 [2024-07-10 14:38:40.582750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.219 [2024-07-10 14:38:40.582773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.219 [2024-07-10 14:38:40.586919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.219 [2024-07-10 14:38:40.596221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.219 [2024-07-10 14:38:40.596775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.219 [2024-07-10 14:38:40.596818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.219 [2024-07-10 14:38:40.596844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.219 [2024-07-10 14:38:40.597149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.219 [2024-07-10 14:38:40.597453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.219 [2024-07-10 14:38:40.597485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.219 [2024-07-10 14:38:40.597508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.219 [2024-07-10 14:38:40.601670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.219 [2024-07-10 14:38:40.610729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.219 [2024-07-10 14:38:40.611208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.219 [2024-07-10 14:38:40.611250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.219 [2024-07-10 14:38:40.611282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.219 [2024-07-10 14:38:40.611582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.219 [2024-07-10 14:38:40.611873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.219 [2024-07-10 14:38:40.611905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.219 [2024-07-10 14:38:40.611926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.219 [2024-07-10 14:38:40.616079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.219 [2024-07-10 14:38:40.625393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.219 [2024-07-10 14:38:40.625914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.219 [2024-07-10 14:38:40.625955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.219 [2024-07-10 14:38:40.625981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.219 [2024-07-10 14:38:40.626269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.219 [2024-07-10 14:38:40.626574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.219 [2024-07-10 14:38:40.626606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.219 [2024-07-10 14:38:40.626628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.219 [2024-07-10 14:38:40.630816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.219 [2024-07-10 14:38:40.639901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.219 [2024-07-10 14:38:40.640374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.219 [2024-07-10 14:38:40.640415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.219 [2024-07-10 14:38:40.640452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.219 [2024-07-10 14:38:40.640741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.219 [2024-07-10 14:38:40.641032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.219 [2024-07-10 14:38:40.641063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.219 [2024-07-10 14:38:40.641085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.219 [2024-07-10 14:38:40.645246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.219 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.219 [2024-07-10 14:38:40.654510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.219 [2024-07-10 14:38:40.655039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.219 [2024-07-10 14:38:40.655079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.219 [2024-07-10 14:38:40.655105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.219 [2024-07-10 14:38:40.655391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.219 [2024-07-10 14:38:40.655690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.219 [2024-07-10 14:38:40.655728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.219 [2024-07-10 14:38:40.655750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.219 [2024-07-10 14:38:40.659870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.219 [2024-07-10 14:38:40.669129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.219 [2024-07-10 14:38:40.669597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.219 [2024-07-10 14:38:40.669639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.219 [2024-07-10 14:38:40.669665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.219 [2024-07-10 14:38:40.669953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.219 [2024-07-10 14:38:40.670242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.219 [2024-07-10 14:38:40.670273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.219 [2024-07-10 14:38:40.670295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.219 [2024-07-10 14:38:40.674436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
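The "EAL: No free 2048 kB hugepages reported on node 1" notice means DPDK found no free 2 MB hugepages on NUMA node 1 while setting up memory for the new target; it can still come up using pages from node 0. Per-node hugepage availability is visible directly in sysfs, for example:

# Report 2048 kB hugepage totals and free counts per NUMA node.
for node in /sys/devices/system/node/node*; do
    n=$(basename "$node")
    total=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    free=$(cat "$node/hugepages/hugepages-2048kB/free_hugepages")
    echo "$n: total=$total free=$free"
done
grep -i huge /proc/meminfo   # system-wide summary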
00:36:31.219 [2024-07-10 14:38:40.683682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.219 [2024-07-10 14:38:40.684195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.219 [2024-07-10 14:38:40.684236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.219 [2024-07-10 14:38:40.684261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.219 [2024-07-10 14:38:40.684565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.219 [2024-07-10 14:38:40.684854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.219 [2024-07-10 14:38:40.684885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.219 [2024-07-10 14:38:40.684909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.219 [2024-07-10 14:38:40.689102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.478 [2024-07-10 14:38:40.698495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.478 [2024-07-10 14:38:40.699002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.479 [2024-07-10 14:38:40.699044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.479 [2024-07-10 14:38:40.699070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.479 [2024-07-10 14:38:40.699356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.479 [2024-07-10 14:38:40.699666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.479 [2024-07-10 14:38:40.699698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.479 [2024-07-10 14:38:40.699726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.479 [2024-07-10 14:38:40.704051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.479 [2024-07-10 14:38:40.713150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.479 [2024-07-10 14:38:40.713666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.479 [2024-07-10 14:38:40.713712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.479 [2024-07-10 14:38:40.713738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.479 [2024-07-10 14:38:40.714024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.479 [2024-07-10 14:38:40.714313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.479 [2024-07-10 14:38:40.714345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.479 [2024-07-10 14:38:40.714367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.479 [2024-07-10 14:38:40.718546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.479 [2024-07-10 14:38:40.718770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:31.479 [2024-07-10 14:38:40.727846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.479 [2024-07-10 14:38:40.728421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.479 [2024-07-10 14:38:40.728472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.479 [2024-07-10 14:38:40.728500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.479 [2024-07-10 14:38:40.728824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.479 [2024-07-10 14:38:40.729123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.479 [2024-07-10 14:38:40.729155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.479 [2024-07-10 14:38:40.729179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.479 [2024-07-10 14:38:40.733447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.479 [2024-07-10 14:38:40.742435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.479 [2024-07-10 14:38:40.743071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.479 [2024-07-10 14:38:40.743118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.479 [2024-07-10 14:38:40.743145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.479 [2024-07-10 14:38:40.743449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.479 [2024-07-10 14:38:40.743760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.479 [2024-07-10 14:38:40.743792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.479 [2024-07-10 14:38:40.743816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.479 [2024-07-10 14:38:40.748008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.479 [2024-07-10 14:38:40.757188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.479 [2024-07-10 14:38:40.757686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.479 [2024-07-10 14:38:40.757743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.479 [2024-07-10 14:38:40.757780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.479 [2024-07-10 14:38:40.758067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.479 [2024-07-10 14:38:40.758358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.479 [2024-07-10 14:38:40.758400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.479 [2024-07-10 14:38:40.758423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.479 [2024-07-10 14:38:40.762685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.479 [2024-07-10 14:38:40.771864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.479 [2024-07-10 14:38:40.772389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.479 [2024-07-10 14:38:40.772438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.479 [2024-07-10 14:38:40.772466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.479 [2024-07-10 14:38:40.772755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.479 [2024-07-10 14:38:40.773044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.479 [2024-07-10 14:38:40.773075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.479 [2024-07-10 14:38:40.773097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.479 [2024-07-10 14:38:40.777266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.479 [2024-07-10 14:38:40.786273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.479 [2024-07-10 14:38:40.786766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.479 [2024-07-10 14:38:40.786807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.479 [2024-07-10 14:38:40.786832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.479 [2024-07-10 14:38:40.787119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.479 [2024-07-10 14:38:40.787406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.479 [2024-07-10 14:38:40.787448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.479 [2024-07-10 14:38:40.787472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.479 [2024-07-10 14:38:40.791604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.479 [2024-07-10 14:38:40.801038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.479 [2024-07-10 14:38:40.801540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.479 [2024-07-10 14:38:40.801583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.479 [2024-07-10 14:38:40.801609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.479 [2024-07-10 14:38:40.801907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.479 [2024-07-10 14:38:40.802211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.479 [2024-07-10 14:38:40.802256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.479 [2024-07-10 14:38:40.802279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.479 [2024-07-10 14:38:40.806473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.479 [2024-07-10 14:38:40.815569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.479 [2024-07-10 14:38:40.816048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.479 [2024-07-10 14:38:40.816089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.479 [2024-07-10 14:38:40.816115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.479 [2024-07-10 14:38:40.816405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.479 [2024-07-10 14:38:40.816720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.479 [2024-07-10 14:38:40.816752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.479 [2024-07-10 14:38:40.816779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.479 [2024-07-10 14:38:40.820983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.479 [2024-07-10 14:38:40.830167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.479 [2024-07-10 14:38:40.830689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.479 [2024-07-10 14:38:40.830730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.479 [2024-07-10 14:38:40.830755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.479 [2024-07-10 14:38:40.831045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.479 [2024-07-10 14:38:40.831337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.479 [2024-07-10 14:38:40.831368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.479 [2024-07-10 14:38:40.831391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.479 [2024-07-10 14:38:40.835634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.479 [2024-07-10 14:38:40.844795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.479 [2024-07-10 14:38:40.845303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.479 [2024-07-10 14:38:40.845344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.479 [2024-07-10 14:38:40.845369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.479 [2024-07-10 14:38:40.845665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.479 [2024-07-10 14:38:40.845954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.480 [2024-07-10 14:38:40.845985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.480 [2024-07-10 14:38:40.846008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.480 [2024-07-10 14:38:40.850168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.480 [2024-07-10 14:38:40.859375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.480 [2024-07-10 14:38:40.860048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.480 [2024-07-10 14:38:40.860098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.480 [2024-07-10 14:38:40.860127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.480 [2024-07-10 14:38:40.860422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.480 [2024-07-10 14:38:40.860738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.480 [2024-07-10 14:38:40.860770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.480 [2024-07-10 14:38:40.860795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.480 [2024-07-10 14:38:40.865016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.480 [2024-07-10 14:38:40.874187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.480 [2024-07-10 14:38:40.874731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.480 [2024-07-10 14:38:40.874774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.480 [2024-07-10 14:38:40.874800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.480 [2024-07-10 14:38:40.875090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.480 [2024-07-10 14:38:40.875386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.480 [2024-07-10 14:38:40.875418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.480 [2024-07-10 14:38:40.875454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.480 [2024-07-10 14:38:40.879656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.480 [2024-07-10 14:38:40.888891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.480 [2024-07-10 14:38:40.889422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.480 [2024-07-10 14:38:40.889473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.480 [2024-07-10 14:38:40.889499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.480 [2024-07-10 14:38:40.889795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.480 [2024-07-10 14:38:40.890089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.480 [2024-07-10 14:38:40.890120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.480 [2024-07-10 14:38:40.890143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.480 [2024-07-10 14:38:40.894396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.480 [2024-07-10 14:38:40.903642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.480 [2024-07-10 14:38:40.904156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.480 [2024-07-10 14:38:40.904204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.480 [2024-07-10 14:38:40.904231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.480 [2024-07-10 14:38:40.904536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.480 [2024-07-10 14:38:40.904828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.480 [2024-07-10 14:38:40.904859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.480 [2024-07-10 14:38:40.904882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.480 [2024-07-10 14:38:40.909095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.480 [2024-07-10 14:38:40.918086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.480 [2024-07-10 14:38:40.918608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.480 [2024-07-10 14:38:40.918651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.480 [2024-07-10 14:38:40.918676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.480 [2024-07-10 14:38:40.918962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.480 [2024-07-10 14:38:40.919250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.480 [2024-07-10 14:38:40.919281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.480 [2024-07-10 14:38:40.919303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.480 [2024-07-10 14:38:40.923414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.480 [2024-07-10 14:38:40.932650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.480 [2024-07-10 14:38:40.933188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.480 [2024-07-10 14:38:40.933230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.480 [2024-07-10 14:38:40.933256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.480 [2024-07-10 14:38:40.933556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.480 [2024-07-10 14:38:40.933844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.480 [2024-07-10 14:38:40.933875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.480 [2024-07-10 14:38:40.933897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.480 [2024-07-10 14:38:40.938029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.480 [2024-07-10 14:38:40.947276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.480 [2024-07-10 14:38:40.947780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.480 [2024-07-10 14:38:40.947821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.480 [2024-07-10 14:38:40.947846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.480 [2024-07-10 14:38:40.948132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.480 [2024-07-10 14:38:40.948444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.480 [2024-07-10 14:38:40.948476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.480 [2024-07-10 14:38:40.948498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.480 [2024-07-10 14:38:40.952681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.740 [2024-07-10 14:38:40.961886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.740 [2024-07-10 14:38:40.962387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-07-10 14:38:40.962440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.740 [2024-07-10 14:38:40.962468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.740 [2024-07-10 14:38:40.962765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.740 [2024-07-10 14:38:40.963116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.740 [2024-07-10 14:38:40.963150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.740 [2024-07-10 14:38:40.963173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.740 [2024-07-10 14:38:40.967432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.740 [2024-07-10 14:38:40.976553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.740 [2024-07-10 14:38:40.977080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-07-10 14:38:40.977120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.740 [2024-07-10 14:38:40.977147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.740 [2024-07-10 14:38:40.977446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.740 [2024-07-10 14:38:40.977737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.740 [2024-07-10 14:38:40.977768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.740 [2024-07-10 14:38:40.977790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.740 [2024-07-10 14:38:40.981945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.740 [2024-07-10 14:38:40.985003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:31.740 [2024-07-10 14:38:40.985046] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:31.740 [2024-07-10 14:38:40.985080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:31.740 [2024-07-10 14:38:40.985100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:31.740 [2024-07-10 14:38:40.985121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:31.740 [2024-07-10 14:38:40.985243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:31.740 [2024-07-10 14:38:40.985291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:31.740 [2024-07-10 14:38:40.985302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:31.740 [2024-07-10 14:38:40.991039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.740 [2024-07-10 14:38:40.991647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-07-10 14:38:40.991693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.740 [2024-07-10 14:38:40.991721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.740 [2024-07-10 14:38:40.992015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.740 [2024-07-10 14:38:40.992314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.740 [2024-07-10 14:38:40.992345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.740 [2024-07-10 14:38:40.992369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.740 [2024-07-10 14:38:40.996592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.740 [2024-07-10 14:38:41.005744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.740 [2024-07-10 14:38:41.006484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-07-10 14:38:41.006538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.740 [2024-07-10 14:38:41.006569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.740 [2024-07-10 14:38:41.006868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.740 [2024-07-10 14:38:41.007163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.740 [2024-07-10 14:38:41.007197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.740 [2024-07-10 14:38:41.007223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.740 [2024-07-10 14:38:41.011421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.740 [2024-07-10 14:38:41.020460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.740 [2024-07-10 14:38:41.020983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-07-10 14:38:41.021024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.740 [2024-07-10 14:38:41.021050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.740 [2024-07-10 14:38:41.021341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.740 [2024-07-10 14:38:41.021651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.740 [2024-07-10 14:38:41.021684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.740 [2024-07-10 14:38:41.021707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.740 [2024-07-10 14:38:41.025963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.740 [2024-07-10 14:38:41.035245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.740 [2024-07-10 14:38:41.035771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-07-10 14:38:41.035813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.740 [2024-07-10 14:38:41.035839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.740 [2024-07-10 14:38:41.036141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.740 [2024-07-10 14:38:41.036448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.740 [2024-07-10 14:38:41.036480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.740 [2024-07-10 14:38:41.036503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.740 [2024-07-10 14:38:41.040746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.740 [2024-07-10 14:38:41.049826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.740 [2024-07-10 14:38:41.050355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-07-10 14:38:41.050396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.740 [2024-07-10 14:38:41.050422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.740 [2024-07-10 14:38:41.050723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.740 [2024-07-10 14:38:41.051015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.740 [2024-07-10 14:38:41.051046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.740 [2024-07-10 14:38:41.051068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.740 [2024-07-10 14:38:41.055258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.740 [2024-07-10 14:38:41.064363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.740 [2024-07-10 14:38:41.064898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-07-10 14:38:41.064941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.740 [2024-07-10 14:38:41.064967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.740 [2024-07-10 14:38:41.065257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.740 [2024-07-10 14:38:41.065561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.740 [2024-07-10 14:38:41.065593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.741 [2024-07-10 14:38:41.065615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.741 [2024-07-10 14:38:41.069820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.741 [2024-07-10 14:38:41.079061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.741 [2024-07-10 14:38:41.079802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-07-10 14:38:41.079857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.741 [2024-07-10 14:38:41.079889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.741 [2024-07-10 14:38:41.080199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.741 [2024-07-10 14:38:41.080518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.741 [2024-07-10 14:38:41.080552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.741 [2024-07-10 14:38:41.080588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.741 [2024-07-10 14:38:41.084850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.741 [2024-07-10 14:38:41.093834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.741 [2024-07-10 14:38:41.094564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-07-10 14:38:41.094620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.741 [2024-07-10 14:38:41.094650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.741 [2024-07-10 14:38:41.094955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.741 [2024-07-10 14:38:41.095255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.741 [2024-07-10 14:38:41.095287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.741 [2024-07-10 14:38:41.095313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.741 [2024-07-10 14:38:41.099554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.741 [2024-07-10 14:38:41.108525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.741 [2024-07-10 14:38:41.109119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-07-10 14:38:41.109164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.741 [2024-07-10 14:38:41.109191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.741 [2024-07-10 14:38:41.109498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.741 [2024-07-10 14:38:41.109795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.741 [2024-07-10 14:38:41.109827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.741 [2024-07-10 14:38:41.109850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.741 [2024-07-10 14:38:41.114014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.741 [2024-07-10 14:38:41.123092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.741 [2024-07-10 14:38:41.123626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-07-10 14:38:41.123668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.741 [2024-07-10 14:38:41.123694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.741 [2024-07-10 14:38:41.123981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.741 [2024-07-10 14:38:41.124274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.741 [2024-07-10 14:38:41.124305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.741 [2024-07-10 14:38:41.124327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.741 [2024-07-10 14:38:41.128500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.741 [2024-07-10 14:38:41.137572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.741 [2024-07-10 14:38:41.138062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-07-10 14:38:41.138102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.741 [2024-07-10 14:38:41.138128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.741 [2024-07-10 14:38:41.138416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.741 [2024-07-10 14:38:41.138717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.741 [2024-07-10 14:38:41.138748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.741 [2024-07-10 14:38:41.138770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.741 [2024-07-10 14:38:41.142903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.741 [2024-07-10 14:38:41.152304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.741 [2024-07-10 14:38:41.152760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-07-10 14:38:41.152801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.741 [2024-07-10 14:38:41.152827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.741 [2024-07-10 14:38:41.153117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.741 [2024-07-10 14:38:41.153408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.741 [2024-07-10 14:38:41.153450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.741 [2024-07-10 14:38:41.153474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.741 [2024-07-10 14:38:41.157701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.741 [2024-07-10 14:38:41.166796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.741 [2024-07-10 14:38:41.167291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-07-10 14:38:41.167333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.741 [2024-07-10 14:38:41.167359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.741 [2024-07-10 14:38:41.167660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.741 [2024-07-10 14:38:41.167947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.741 [2024-07-10 14:38:41.167978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.741 [2024-07-10 14:38:41.167999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.741 [2024-07-10 14:38:41.172102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.741 [2024-07-10 14:38:41.181308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.741 [2024-07-10 14:38:41.181794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-07-10 14:38:41.181836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.741 [2024-07-10 14:38:41.181861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.741 [2024-07-10 14:38:41.182150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.741 [2024-07-10 14:38:41.182447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.741 [2024-07-10 14:38:41.182479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.741 [2024-07-10 14:38:41.182501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.741 [2024-07-10 14:38:41.186597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:31.741 [2024-07-10 14:38:41.195753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.741 [2024-07-10 14:38:41.196208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-07-10 14:38:41.196249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.741 [2024-07-10 14:38:41.196275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.741 [2024-07-10 14:38:41.196571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.741 [2024-07-10 14:38:41.196856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.741 [2024-07-10 14:38:41.196887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.741 [2024-07-10 14:38:41.196909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.741 [2024-07-10 14:38:41.201003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:31.741 [2024-07-10 14:38:41.210172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.741 [2024-07-10 14:38:41.210694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-07-10 14:38:41.210734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:31.741 [2024-07-10 14:38:41.210759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:31.741 [2024-07-10 14:38:41.211047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:31.741 [2024-07-10 14:38:41.211333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:31.741 [2024-07-10 14:38:41.211364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:31.741 [2024-07-10 14:38:41.211386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:31.741 [2024-07-10 14:38:41.215571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:32.002 [2024-07-10 14:38:41.224836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.002 [2024-07-10 14:38:41.225388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.002 [2024-07-10 14:38:41.225440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.002 [2024-07-10 14:38:41.225470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.002 [2024-07-10 14:38:41.225760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.002 [2024-07-10 14:38:41.226055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.002 [2024-07-10 14:38:41.226087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.002 [2024-07-10 14:38:41.226115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.002 [2024-07-10 14:38:41.230356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.002 [2024-07-10 14:38:41.239537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.002 [2024-07-10 14:38:41.240272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.002 [2024-07-10 14:38:41.240325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.002 [2024-07-10 14:38:41.240355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.002 [2024-07-10 14:38:41.240669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.002 [2024-07-10 14:38:41.240966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.002 [2024-07-10 14:38:41.240999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.002 [2024-07-10 14:38:41.241025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.002 [2024-07-10 14:38:41.245222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:32.002 [2024-07-10 14:38:41.254070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.002 [2024-07-10 14:38:41.254619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.002 [2024-07-10 14:38:41.254664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.002 [2024-07-10 14:38:41.254690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.002 [2024-07-10 14:38:41.254983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.002 [2024-07-10 14:38:41.255276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.002 [2024-07-10 14:38:41.255307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.002 [2024-07-10 14:38:41.255330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.002 [2024-07-10 14:38:41.259504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.002 [2024-07-10 14:38:41.268610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.002 [2024-07-10 14:38:41.269112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.002 [2024-07-10 14:38:41.269152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.002 [2024-07-10 14:38:41.269178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.002 [2024-07-10 14:38:41.269479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.002 [2024-07-10 14:38:41.269772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.002 [2024-07-10 14:38:41.269804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.003 [2024-07-10 14:38:41.269826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.003 [2024-07-10 14:38:41.274039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:32.003 [2024-07-10 14:38:41.283204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.003 [2024-07-10 14:38:41.283766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.003 [2024-07-10 14:38:41.283806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.003 [2024-07-10 14:38:41.283832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.003 [2024-07-10 14:38:41.284131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.003 [2024-07-10 14:38:41.284420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.003 [2024-07-10 14:38:41.284461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.003 [2024-07-10 14:38:41.284483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.003 [2024-07-10 14:38:41.288680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.003 [2024-07-10 14:38:41.297857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.003 [2024-07-10 14:38:41.298317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.003 [2024-07-10 14:38:41.298358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.003 [2024-07-10 14:38:41.298384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.003 [2024-07-10 14:38:41.298688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.003 [2024-07-10 14:38:41.298992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.003 [2024-07-10 14:38:41.299023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.003 [2024-07-10 14:38:41.299044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.003 [2024-07-10 14:38:41.303264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:32.003 [2024-07-10 14:38:41.312411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.003 [2024-07-10 14:38:41.312925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.003 [2024-07-10 14:38:41.312965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.003 [2024-07-10 14:38:41.312991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.003 [2024-07-10 14:38:41.313294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.003 [2024-07-10 14:38:41.313593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.003 [2024-07-10 14:38:41.313625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.003 [2024-07-10 14:38:41.313648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.003 [2024-07-10 14:38:41.317812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.003 [2024-07-10 14:38:41.327085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.003 [2024-07-10 14:38:41.327622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.003 [2024-07-10 14:38:41.327664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.003 [2024-07-10 14:38:41.327689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.003 [2024-07-10 14:38:41.327996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.003 [2024-07-10 14:38:41.328285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.003 [2024-07-10 14:38:41.328317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.003 [2024-07-10 14:38:41.328339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.003 [2024-07-10 14:38:41.332561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:32.003 [2024-07-10 14:38:41.341698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.003 [2024-07-10 14:38:41.342261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.003 [2024-07-10 14:38:41.342315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.003 [2024-07-10 14:38:41.342342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.003 [2024-07-10 14:38:41.342645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.003 [2024-07-10 14:38:41.342950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.003 [2024-07-10 14:38:41.342982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.003 [2024-07-10 14:38:41.343007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.003 [2024-07-10 14:38:41.347211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.003 [2024-07-10 14:38:41.356471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.003 [2024-07-10 14:38:41.357027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.003 [2024-07-10 14:38:41.357068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.003 [2024-07-10 14:38:41.357105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.003 [2024-07-10 14:38:41.357395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.003 [2024-07-10 14:38:41.357702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.003 [2024-07-10 14:38:41.357743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.003 [2024-07-10 14:38:41.357765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.003 [2024-07-10 14:38:41.362030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:32.003 [2024-07-10 14:38:41.371007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.003 [2024-07-10 14:38:41.371503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.003 [2024-07-10 14:38:41.371545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.003 [2024-07-10 14:38:41.371572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.003 [2024-07-10 14:38:41.371869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.003 [2024-07-10 14:38:41.372159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.003 [2024-07-10 14:38:41.372195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.003 [2024-07-10 14:38:41.372219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.003 [2024-07-10 14:38:41.376527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.003 [2024-07-10 14:38:41.385699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.003 [2024-07-10 14:38:41.386195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.003 [2024-07-10 14:38:41.386236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.003 [2024-07-10 14:38:41.386261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.003 [2024-07-10 14:38:41.386562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.003 [2024-07-10 14:38:41.386870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.003 [2024-07-10 14:38:41.386901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.003 [2024-07-10 14:38:41.386922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.003 [2024-07-10 14:38:41.391077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:32.003 [2024-07-10 14:38:41.400339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.003 [2024-07-10 14:38:41.400814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.003 [2024-07-10 14:38:41.400856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.004 [2024-07-10 14:38:41.400881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.004 [2024-07-10 14:38:41.401168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.004 [2024-07-10 14:38:41.401471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.004 [2024-07-10 14:38:41.401504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.004 [2024-07-10 14:38:41.401526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.004 [2024-07-10 14:38:41.405679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.004 [2024-07-10 14:38:41.414904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.004 [2024-07-10 14:38:41.415377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.004 [2024-07-10 14:38:41.415418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.004 [2024-07-10 14:38:41.415455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.004 [2024-07-10 14:38:41.415739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.004 [2024-07-10 14:38:41.416025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.004 [2024-07-10 14:38:41.416056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.004 [2024-07-10 14:38:41.416078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.004 [2024-07-10 14:38:41.420179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:32.004 [2024-07-10 14:38:41.429362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.004 [2024-07-10 14:38:41.429877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.004 [2024-07-10 14:38:41.429918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.004 [2024-07-10 14:38:41.429960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.004 [2024-07-10 14:38:41.430246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.004 [2024-07-10 14:38:41.430544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.004 [2024-07-10 14:38:41.430576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.004 [2024-07-10 14:38:41.430598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.004 [2024-07-10 14:38:41.434680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.004 [2024-07-10 14:38:41.443821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.004 [2024-07-10 14:38:41.444306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.004 [2024-07-10 14:38:41.444347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.004 [2024-07-10 14:38:41.444372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.004 [2024-07-10 14:38:41.444666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.004 [2024-07-10 14:38:41.444953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.004 [2024-07-10 14:38:41.444984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.004 [2024-07-10 14:38:41.445006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.004 [2024-07-10 14:38:41.449082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:32.004 [2024-07-10 14:38:41.458212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.004 [2024-07-10 14:38:41.458674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.004 [2024-07-10 14:38:41.458715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.004 [2024-07-10 14:38:41.458749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.004 [2024-07-10 14:38:41.459030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.004 [2024-07-10 14:38:41.459325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.004 [2024-07-10 14:38:41.459357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.004 [2024-07-10 14:38:41.459379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.004 [2024-07-10 14:38:41.463476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.004 [2024-07-10 14:38:41.472604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.004 [2024-07-10 14:38:41.473088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.004 [2024-07-10 14:38:41.473129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.004 [2024-07-10 14:38:41.473160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.004 [2024-07-10 14:38:41.473454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.004 [2024-07-10 14:38:41.473741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.004 [2024-07-10 14:38:41.473772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.004 [2024-07-10 14:38:41.473794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.004 [2024-07-10 14:38:41.477960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
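The block above repeats one pattern: bdev_nvme keeps resetting nqn.2016-06.io.spdk:cnode1 and redialing 10.0.0.2:4420, and every attempt dies in posix_sock_create with errno 111 (ECONNREFUSED), because the target side has not yet added an NVMe/TCP listener on that address; the listener only appears further down, after which the log reports "Resetting controller successful." A minimal stand-alone probe for the same condition, sketched here as an illustration only (not part of the test scripts; address and port taken from the trace):

    # Illustrative only: report whether anything accepts TCP connections on the
    # address/port the host keeps dialing. While no listener exists, connect()
    # fails with ECONNREFUSED (errno 111), matching the posix_sock_create errors.
    probe_listener() {
        local ip=$1 port=$2
        # /dev/tcp is a bash pseudo-device; the connect happens in a short-lived
        # child shell, so nothing stays open afterwards.
        if timeout 1 bash -c "exec 3<>/dev/tcp/${ip}/${port}" 2>/dev/null; then
            echo "listener up on ${ip}:${port}"
        else
            echo "connect() refused or timed out on ${ip}:${port}"
        fi
    }
    probe_listener 10.0.0.2 4420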
00:36:32.264 [2024-07-10 14:38:41.486688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.264 [2024-07-10 14:38:41.487149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.264 [2024-07-10 14:38:41.487200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.264 [2024-07-10 14:38:41.487227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.264 [2024-07-10 14:38:41.487496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.264 [2024-07-10 14:38:41.487755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.264 [2024-07-10 14:38:41.487783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.264 [2024-07-10 14:38:41.487804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.264 [2024-07-10 14:38:41.491578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.264 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:32.264 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:36:32.264 14:38:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:32.264 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:32.264 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.264 [2024-07-10 14:38:41.500933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.264 [2024-07-10 14:38:41.501371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.264 [2024-07-10 14:38:41.501415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.264 [2024-07-10 14:38:41.501448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.264 [2024-07-10 14:38:41.501730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.264 [2024-07-10 14:38:41.502001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.264 [2024-07-10 14:38:41.502036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.264 [2024-07-10 14:38:41.502055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.264 [2024-07-10 14:38:41.505852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:32.264 14:38:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:32.264 14:38:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:32.264 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.264 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.264 [2024-07-10 14:38:41.515240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.264 [2024-07-10 14:38:41.515782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.264 [2024-07-10 14:38:41.515827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.264 [2024-07-10 14:38:41.515850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.264 [2024-07-10 14:38:41.516123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.264 [2024-07-10 14:38:41.516375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.264 [2024-07-10 14:38:41.516402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.264 [2024-07-10 14:38:41.516455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.264 [2024-07-10 14:38:41.520185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.264 [2024-07-10 14:38:41.520719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:32.264 [2024-07-10 14:38:41.529276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.264 [2024-07-10 14:38:41.529774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.264 [2024-07-10 14:38:41.529812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.264 [2024-07-10 14:38:41.529835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.264 [2024-07-10 14:38:41.530119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.264 [2024-07-10 14:38:41.530364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.264 [2024-07-10 14:38:41.530391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.264 [2024-07-10 14:38:41.530434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.264 [2024-07-10 14:38:41.534127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
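The rpc_cmd nvmf_create_transport -t tcp -o -u 8192 call traced above is what produces the "*** TCP Transport Init ***" notice; rpc_cmd is the test wrapper around SPDK's scripts/rpc.py. A hedged direct equivalent is sketched below, with the flags copied verbatim from the trace and the RPC socket path assumed to be the default /var/tmp/spdk.sock mentioned later in this log:

    # Hedged sketch: the same transport creation issued straight through rpc.py.
    # The socket path and rpc.py location are assumptions based on this workspace.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    "$rpc_py" -s "$sock" nvmf_create_transport -t tcp -o -u 8192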
00:36:32.264 [2024-07-10 14:38:41.543384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.264 [2024-07-10 14:38:41.543868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.264 [2024-07-10 14:38:41.543905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.264 [2024-07-10 14:38:41.543929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.264 [2024-07-10 14:38:41.544187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.264 [2024-07-10 14:38:41.544465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.264 [2024-07-10 14:38:41.544493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.264 [2024-07-10 14:38:41.544514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.264 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.264 14:38:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:32.264 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.264 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.264 [2024-07-10 14:38:41.549105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.264 [2024-07-10 14:38:41.557735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.264 [2024-07-10 14:38:41.558449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.264 [2024-07-10 14:38:41.558500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.264 [2024-07-10 14:38:41.558529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.264 [2024-07-10 14:38:41.558829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.264 [2024-07-10 14:38:41.559091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.264 [2024-07-10 14:38:41.559120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.264 [2024-07-10 14:38:41.559151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.264 [2024-07-10 14:38:41.563005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:32.264 [2024-07-10 14:38:41.572005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.264 [2024-07-10 14:38:41.572578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.264 [2024-07-10 14:38:41.572624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.264 [2024-07-10 14:38:41.572651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.264 [2024-07-10 14:38:41.572929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.264 [2024-07-10 14:38:41.573188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.264 [2024-07-10 14:38:41.573217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.264 [2024-07-10 14:38:41.573239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.264 [2024-07-10 14:38:41.576947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.264 [2024-07-10 14:38:41.586080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.264 [2024-07-10 14:38:41.586571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.264 [2024-07-10 14:38:41.586608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.264 [2024-07-10 14:38:41.586632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.264 [2024-07-10 14:38:41.586905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.264 [2024-07-10 14:38:41.587159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.264 [2024-07-10 14:38:41.587186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.264 [2024-07-10 14:38:41.587206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.264 [2024-07-10 14:38:41.590946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:32.265 [2024-07-10 14:38:41.600127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.265 [2024-07-10 14:38:41.600600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.265 [2024-07-10 14:38:41.600637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.265 [2024-07-10 14:38:41.600665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.265 [2024-07-10 14:38:41.600939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.265 [2024-07-10 14:38:41.601194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.265 [2024-07-10 14:38:41.601220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.265 [2024-07-10 14:38:41.601239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.265 [2024-07-10 14:38:41.604941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.265 [2024-07-10 14:38:41.614185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.265 [2024-07-10 14:38:41.614667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.265 [2024-07-10 14:38:41.614704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.265 [2024-07-10 14:38:41.614727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.265 [2024-07-10 14:38:41.615006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.265 [2024-07-10 14:38:41.615258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.265 [2024-07-10 14:38:41.615285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.265 [2024-07-10 14:38:41.615303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.265 [2024-07-10 14:38:41.618986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:32.265 Malloc0 00:36:32.265 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.265 14:38:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:32.265 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.265 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.265 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.265 14:38:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:32.265 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.265 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.265 [2024-07-10 14:38:41.628440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.265 [2024-07-10 14:38:41.629096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.265 [2024-07-10 14:38:41.629148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:32.265 [2024-07-10 14:38:41.629181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:32.265 [2024-07-10 14:38:41.629600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:32.265 [2024-07-10 14:38:41.629904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:32.265 [2024-07-10 14:38:41.629933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:32.265 [2024-07-10 14:38:41.629953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:32.265 [2024-07-10 14:38:41.633587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.265 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.265 14:38:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:32.265 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.265 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.265 [2024-07-10 14:38:41.639687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.265 [2024-07-10 14:38:41.642539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:32.265 14:38:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.265 14:38:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1550534 00:36:32.523 [2024-07-10 14:38:41.770760] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
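This is the turning point of the long failure run: host/bdevperf.sh creates subsystem nqn.2016-06.io.spdk:cnode1, attaches the Malloc0 namespace, and finally adds the 10.0.0.2:4420 TCP listener, after which the very next reconnect attempt ends with "Resetting controller successful." A hedged rpc.py rendering of that bring-up sequence (identifiers taken from the trace; -a allows any host NQN, -s sets the serial number; socket path assumed as above):

    # Hedged sketch of the target bring-up traced above, issued via rpc.py.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc_py" -s "$sock" bdev_malloc_create 64 512 -b Malloc0    # 64 MiB, 512 B blocks (bdevperf.sh@18)
    "$rpc_py" -s "$sock" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
    "$rpc_py" -s "$sock" nvmf_subsystem_add_ns "$nqn" Malloc0
    "$rpc_py" -s "$sock" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    # Only once the listener exists can the host's reset/reconnect loop succeed.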
00:36:42.490 
00:36:42.490                                                                               Latency(us)
00:36:42.490 Device Information                                                        : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:36:42.490 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:36:42.490     Verification LBA range: start 0x0 length 0x4000
00:36:42.490     Nvme1n1             :      15.01    4448.20      17.38    9121.08       0.00    9404.41    1650.54   40777.96
00:36:42.490 ===================================================================================================================
00:36:42.490 Total               :               4448.20      17.38    9121.08       0.00    9404.41    1650.54   40777.96
00:36:42.490 14:38:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:36:42.490 14:38:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:42.490 14:38:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:42.490 14:38:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:42.490 14:38:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:42.490 14:38:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:36:42.490 14:38:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:36:42.490 14:38:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:36:42.490 14:38:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:36:42.490 14:38:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:36:42.490 14:38:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:36:42.490 14:38:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:36:42.490 14:38:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:36:42.491 rmmod nvme_tcp
00:36:42.491 rmmod nvme_fabrics
00:36:42.491 rmmod nvme_keyring
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1551317 ']'
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1551317
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1551317 ']'
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1551317
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1551317
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1551317'
00:36:42.491 killing process with pid 1551317
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1551317
00:36:42.491 14:38:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1551317
00:36:43.424 14:38:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:36:43.424 14:38:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
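The nvmftestfini/nvmfcleanup sequence above unloads the host-side nvme-tcp/nvme-fabrics modules (the rmmod lines) and then tears the target app down via killprocess 1551317. A simplified sketch of that kill-and-wait pattern is below; the in-tree helper in autotest_common.sh additionally special-cases processes whose comm is sudo, as the '[' reactor_1 = sudo ']' check shows:

    # Simplified sketch of the killprocess/wait pattern traced above.
    killprocess_sketch() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do, already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true          # wait only reaps our own children
    }
    # e.g. killprocess_sketch "$nvmfpid"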
00:36:43.424 14:38:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:43.424 14:38:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:43.424 14:38:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:43.424 14:38:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.424 14:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:43.424 14:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:45.949 14:38:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:45.949 00:36:45.949 real 0m26.630s 00:36:45.949 user 1m13.896s 00:36:45.949 sys 0m4.334s 00:36:45.949 14:38:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:45.949 14:38:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:45.949 ************************************ 00:36:45.949 END TEST nvmf_bdevperf 00:36:45.949 ************************************ 00:36:45.949 14:38:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:45.949 14:38:54 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:45.949 14:38:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:45.949 14:38:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:45.949 14:38:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:45.949 ************************************ 00:36:45.949 START TEST nvmf_target_disconnect 00:36:45.949 ************************************ 00:36:45.949 14:38:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:45.949 * Looking for test storage... 
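The END TEST / START TEST banners and the real/user/sys lines above come from run_test, the autotest harness wrapper that brackets each test script with a banner and times it. A heavily reduced sketch of such a wrapper is below; the real implementation in autotest_common.sh also manages xtrace, argument checks (the '[' 3 -le 1 ']' lines), and per-test exit codes:

    # Heavily reduced sketch of a run_test-style wrapper (illustrative only).
    run_test_sketch() {
        local name=$1; shift
        local rc
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # e.g. run_test_sketch nvmf_target_disconnect \
    #      ./test/nvmf/host/target_disconnect.sh --transport=tcp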
00:36:45.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.949 14:38:55 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:36:45.950 14:38:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
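The e810/x722/mlx arrays being filled above are simply lists of NIC PCI device IDs the nvmf tests support; the lines that follow match them against the machine's PCI functions (two Intel E810 ports at 0000:0a:00.0/1, device 0x159b, on this node) and resolve each function to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are found. A stand-alone sketch of that lookup, using the PCI address reported just below:

    # Sketch of the PCI-function-to-net-device resolution performed by the
    # following lines; the address is the one this log reports.
    pci=0000:0a:00.0
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e "$netdir" ]] || continue                    # no netdev bound here
        dev=${netdir##*/}                                 # e.g. cvl_0_0
        state=$(cat /sys/class/net/"$dev"/operstate 2>/dev/null)
        echo "Found net device under $pci: $dev (operstate: $state)"
    done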
00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:47.853 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:47.853 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.853 14:38:56 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:47.853 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:47.853 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:47.853 14:38:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:47.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:47.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:36:47.853 00:36:47.853 --- 10.0.0.2 ping statistics --- 00:36:47.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.853 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:47.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:47.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:36:47.853 00:36:47.853 --- 10.0.0.1 ping statistics --- 00:36:47.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.853 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:47.853 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:47.854 ************************************ 00:36:47.854 START TEST nvmf_target_disconnect_tc1 00:36:47.854 ************************************ 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:36:47.854 
14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:47.854 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:47.854 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.113 [2024-07-10 14:38:57.341486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.113 [2024-07-10 14:38:57.341589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:36:48.113 [2024-07-10 14:38:57.341672] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:48.113 [2024-07-10 14:38:57.341702] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:48.113 [2024-07-10 14:38:57.341751] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:36:48.113 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:48.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:48.113 Initializing NVMe Controllers 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:48.113 00:36:48.113 real 0m0.235s 00:36:48.113 user 0m0.099s 00:36:48.113 sys 
0m0.134s 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:48.113 ************************************ 00:36:48.113 END TEST nvmf_target_disconnect_tc1 00:36:48.113 ************************************ 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:48.113 ************************************ 00:36:48.113 START TEST nvmf_target_disconnect_tc2 00:36:48.113 ************************************ 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1554735 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1554735 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1554735 ']' 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:48.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
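Test case tc1 above deliberately ran the reconnect example before any target was listening, so the connect() failure (errno = 111) and the non-zero exit status are exactly what the NOT wrapper asserts; the case passes because probing 10.0.0.2:4420 fails. tc2 then brings up a real target: nvmf_tgt is started inside cvl_0_0_ns_spdk with core mask 0xF0 and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers. A minimal sketch of that launch-and-wait step (the polling loop below is an assumption; the real waitforlisten helper in autotest_common.sh does more bookkeeping):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1     # give up if the target died during startup
        sleep 0.5
    done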
00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:48.113 14:38:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:48.113 [2024-07-10 14:38:57.511785] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:36:48.113 [2024-07-10 14:38:57.511922] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:48.113 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.372 [2024-07-10 14:38:57.642489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:48.630 [2024-07-10 14:38:57.873498] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:48.630 [2024-07-10 14:38:57.873560] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:48.630 [2024-07-10 14:38:57.873600] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:48.630 [2024-07-10 14:38:57.873619] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:48.630 [2024-07-10 14:38:57.873637] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:48.630 [2024-07-10 14:38:57.873772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:36:48.630 [2024-07-10 14:38:57.873898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:36:48.630 [2024-07-10 14:38:57.873938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:36:48.630 [2024-07-10 14:38:57.873948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:49.194 Malloc0 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:49.194 14:38:58 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:49.194 [2024-07-10 14:38:58.560623] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:49.194 [2024-07-10 14:38:58.590282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1554885 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:49.194 14:38:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:49.451 EAL: No free 2048 kB 
hugepages reported on node 1 00:36:51.358 14:39:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1554735 00:36:51.358 14:39:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Write completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Write completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.358 Read completed with error (sct=0, sc=8) 00:36:51.358 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 
starting I/O failed 00:36:51.359 [2024-07-10 14:39:00.633865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 [2024-07-10 14:39:00.634544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 
00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Write completed with error (sct=0, sc=8) 00:36:51.359 starting I/O failed 00:36:51.359 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Write completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 [2024-07-10 14:39:00.635177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read 
completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Write completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Write completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Write completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Write completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Read completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Write completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Write completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 Write completed with error (sct=0, sc=8) 00:36:51.360 starting I/O failed 00:36:51.360 [2024-07-10 14:39:00.635851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:51.360 [2024-07-10 14:39:00.636149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.360 [2024-07-10 14:39:00.636199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.360 qpair failed and we were unable to recover it. 00:36:51.360 [2024-07-10 14:39:00.636393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.360 [2024-07-10 14:39:00.636442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.360 qpair failed and we were unable to recover it. 00:36:51.360 [2024-07-10 14:39:00.636642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.360 [2024-07-10 14:39:00.636677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.360 qpair failed and we were unable to recover it. 00:36:51.360 [2024-07-10 14:39:00.636847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.360 [2024-07-10 14:39:00.636881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.360 qpair failed and we were unable to recover it. 00:36:51.360 [2024-07-10 14:39:00.637118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.360 [2024-07-10 14:39:00.637167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.360 qpair failed and we were unable to recover it. 00:36:51.360 [2024-07-10 14:39:00.637365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.360 [2024-07-10 14:39:00.637400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.360 qpair failed and we were unable to recover it. 
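With the target up, target_disconnect.sh provisions it over RPC: a 64 MB malloc bdev with 512-byte blocks, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as a namespace, and data plus discovery listeners on 10.0.0.2:4420. It then starts the reconnect example (queue depth 32, 4 KiB random I/O at 50% reads, 10 seconds, core mask 0xF) and, two seconds in, SIGKILLs the target pid out from under it; the bursts of "Read/Write completed with error (sct=0, sc=8)" and the CQ transport errors on qpairs 1-4 above are that forced disconnect being observed on each I/O qpair. The same sequence written as plain rpc.py calls (rpc_cmd in the trace is the test suite's wrapper around these RPCs, so the rpc.py flag spellings below are an approximation, and $nvmfpid stands in for the literal pid 1554735):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    sleep 2
    kill -9 "$nvmfpid"        # forced target crash; the reconnect attempts follow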
00:36:51.360 [2024-07-10 14:39:00.637613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.360 [2024-07-10 14:39:00.637649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.360 qpair failed and we were unable to recover it. 00:36:51.360 [2024-07-10 14:39:00.637833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.360 [2024-07-10 14:39:00.637866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.360 qpair failed and we were unable to recover it. 00:36:51.360 [2024-07-10 14:39:00.638072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.360 [2024-07-10 14:39:00.638105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.360 qpair failed and we were unable to recover it. 00:36:51.360 [2024-07-10 14:39:00.638362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.360 [2024-07-10 14:39:00.638395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.360 qpair failed and we were unable to recover it. 00:36:51.360 [2024-07-10 14:39:00.638563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.360 [2024-07-10 14:39:00.638601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.360 qpair failed and we were unable to recover it. 00:36:51.360 [2024-07-10 14:39:00.638757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.360 [2024-07-10 14:39:00.638790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.360 qpair failed and we were unable to recover it. 00:36:51.360 [2024-07-10 14:39:00.639072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.639120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.639361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.639438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.639625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.639659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.639846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.639878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 
00:36:51.361 [2024-07-10 14:39:00.640056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.640092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.640389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.640456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.640651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.640684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.640917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.640948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.641218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.641290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.641513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.641550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.641767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.641815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.642004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.642039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.642250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.642297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.642489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.642526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 
00:36:51.361 [2024-07-10 14:39:00.642739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.642802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.643107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.643147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.643368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.643405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.643597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.643630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.643838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.643874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.644104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.644160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.644434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.644468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.644668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.644724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.644968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.645008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.645255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.645303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 
00:36:51.361 [2024-07-10 14:39:00.645505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.645539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.645714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.645753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.645986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.646019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.646229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.646278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.646460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.646494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.646724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.646775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.647107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.647179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.647396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.361 [2024-07-10 14:39:00.647438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.361 qpair failed and we were unable to recover it. 00:36:51.361 [2024-07-10 14:39:00.647655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.647689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.647867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.647902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 
00:36:51.362 [2024-07-10 14:39:00.648085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.648119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.648333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.648385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.648584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.648619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.648779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.648823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.649018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.649060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.649264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.649316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.649554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.649598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.649817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.649870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.650087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.650124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.650332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.650368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 
00:36:51.362 [2024-07-10 14:39:00.650563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.650597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.650799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.650832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.651001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.651034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.651653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.651686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.651900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.651933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.652116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.652149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.652324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.652366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.652584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.652619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.652847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.652881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.653097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.653135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 
00:36:51.362 [2024-07-10 14:39:00.653346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.653382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.653583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.653617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.653800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.653833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.654021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.654074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.654275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.654312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.654552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.654588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.654797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.654834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.655122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.655156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.655355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.655404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.655588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.655621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 
00:36:51.362 [2024-07-10 14:39:00.655831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.655864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.656024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.656056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.656257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.656291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.656486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.656520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.656709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.656756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.656940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.656992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.657198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.362 [2024-07-10 14:39:00.657234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.362 qpair failed and we were unable to recover it. 00:36:51.362 [2024-07-10 14:39:00.657429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.657463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.657702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.657770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.657997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.658033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 
00:36:51.363 [2024-07-10 14:39:00.658217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.658253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.658505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.658541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.658731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.658765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.661481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.661519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.661747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.661786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.662034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.662071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.662326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.662364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.662593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.662627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.662815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.662848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.663041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.663076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 
00:36:51.363 [2024-07-10 14:39:00.663297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.663350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.663575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.663608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.663815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.663848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.664048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.664083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.664294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.664331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.664578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.664610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.664831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.664864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.665167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.665201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.665447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.665484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.665633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.665665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 
00:36:51.363 [2024-07-10 14:39:00.667442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.667507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.667729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.667776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.668043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.668078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.668286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.668324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.668593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.668627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.668801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.668834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.669044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.669077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.671440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.671496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.671694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.671736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.672006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.672040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 
00:36:51.363 [2024-07-10 14:39:00.672302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.672334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.672533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.672578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.672761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.672794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.673004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.673052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.673364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.673401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.673650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.673686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.673890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.363 [2024-07-10 14:39:00.673922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.363 qpair failed and we were unable to recover it. 00:36:51.363 [2024-07-10 14:39:00.674182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.674216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.674388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.674466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.674664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.674696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 
00:36:51.364 [2024-07-10 14:39:00.674955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.674988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.676442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.676483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.676772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.676813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.677022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.677055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.677239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.677276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.677491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.677525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.677727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.677777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.678086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.678135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.680439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.680479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.680682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.680731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 
00:36:51.364 [2024-07-10 14:39:00.680940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.680976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.681258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.681292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.681509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.681542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.681736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.681769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.681971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.682004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.682213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.682246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.682436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.682470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.682670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.682704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.682902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.682935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.683138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.683171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 
00:36:51.364 [2024-07-10 14:39:00.685439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.685481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.685788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.685821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.686075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.686110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.686307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.686339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.686543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.686577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.686801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.686835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.687020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.687052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.687289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.687323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.687508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.687547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 00:36:51.364 [2024-07-10 14:39:00.690441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.364 [2024-07-10 14:39:00.690483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.364 qpair failed and we were unable to recover it. 
00:36:51.364 [2024-07-10 14:39:00.690730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.690768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.690964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.691001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.691175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.691207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.691438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.691475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.691697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.691754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.691985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.692032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.692237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.692282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.692499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.692547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.692773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.692827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.693014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.693058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 
00:36:51.365 [2024-07-10 14:39:00.693317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.693366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.693574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.693609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.693800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.693843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.694026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.694060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.694273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.694311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.694468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.694501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.694715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.694757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.694938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.694971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.695139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.695172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.695328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.695361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 
00:36:51.365 [2024-07-10 14:39:00.695533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.695566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.695706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.695746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.695915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.695948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.696121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.696153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.696297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.696329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.696520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.696553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.696766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.696798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.696983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.697026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.697191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.697232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.697390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.697433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 
00:36:51.365 [2024-07-10 14:39:00.697588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.697621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.697795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.697827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.698006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.698039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.698222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.698254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.698453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.698486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.698644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.698676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.698831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.698864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.699065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.699097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.699280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.699312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.365 qpair failed and we were unable to recover it. 00:36:51.365 [2024-07-10 14:39:00.699467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.365 [2024-07-10 14:39:00.699500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 
00:36:51.366 [2024-07-10 14:39:00.699671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.699704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.699909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.699942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.700126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.700158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.700336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.700369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.700554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.700588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.700743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.700776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.700967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.700999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.701180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.701213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.701397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.701445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.701623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.701655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 
00:36:51.366 [2024-07-10 14:39:00.701876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.701908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.702107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.702144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.702314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.702352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.702580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.702613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.702771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.702808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.702987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.703019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.703184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.703216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.703407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.703448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.703626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.703659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.703829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.703862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 
00:36:51.366 [2024-07-10 14:39:00.704006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.704056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.704270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.704302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.704499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.704533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.704720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.704764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.704974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.705017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.705195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.705227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.705380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.705416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.705623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.705655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.705853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.705885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.706061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.706093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 
00:36:51.366 [2024-07-10 14:39:00.706306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.706339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.706524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.706557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.706732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.706765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.706919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.706952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.707103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.707136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.707332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.707364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.707535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.366 [2024-07-10 14:39:00.707567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.366 qpair failed and we were unable to recover it. 00:36:51.366 [2024-07-10 14:39:00.707729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.367 [2024-07-10 14:39:00.707761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.367 qpair failed and we were unable to recover it. 00:36:51.367 [2024-07-10 14:39:00.707963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.367 [2024-07-10 14:39:00.707999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.367 qpair failed and we were unable to recover it. 00:36:51.367 [2024-07-10 14:39:00.708221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.367 [2024-07-10 14:39:00.708254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.367 qpair failed and we were unable to recover it. 
00:36:51.367 [2024-07-10 14:39:00.708403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.367 [2024-07-10 14:39:00.708448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.367 qpair failed and we were unable to recover it. 00:36:51.367 [2024-07-10 14:39:00.708612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.367 [2024-07-10 14:39:00.708644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.367 qpair failed and we were unable to recover it. 00:36:51.367 [2024-07-10 14:39:00.708840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.367 [2024-07-10 14:39:00.708872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.367 qpair failed and we were unable to recover it. 00:36:51.367 [2024-07-10 14:39:00.709023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.367 [2024-07-10 14:39:00.709056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.367 qpair failed and we were unable to recover it. 00:36:51.367 [2024-07-10 14:39:00.709234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.367 [2024-07-10 14:39:00.709266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.367 qpair failed and we were unable to recover it. 00:36:51.367 [2024-07-10 14:39:00.709420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.367 [2024-07-10 14:39:00.709457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.367 qpair failed and we were unable to recover it. 00:36:51.367 [2024-07-10 14:39:00.709643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.367 [2024-07-10 14:39:00.709675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.367 qpair failed and we were unable to recover it. 00:36:51.367 [2024-07-10 14:39:00.709856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.367 [2024-07-10 14:39:00.709888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.367 qpair failed and we were unable to recover it. 00:36:51.367 [2024-07-10 14:39:00.710095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.367 [2024-07-10 14:39:00.710147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.367 qpair failed and we were unable to recover it. 00:36:51.367 [2024-07-10 14:39:00.710352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.367 [2024-07-10 14:39:00.710385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.367 qpair failed and we were unable to recover it. 
00:36:51.367 [2024-07-10 14:39:00.710561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.367 [2024-07-10 14:39:00.710594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:51.367 qpair failed and we were unable to recover it.
[... the same three-line error record repeats continuously with successive timestamps (connect() failed, errno = 111; sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) through 2024-07-10 14:39:00.758 ...]
00:36:51.372 [2024-07-10 14:39:00.759021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.372 [2024-07-10 14:39:00.759056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.372 qpair failed and we were unable to recover it. 00:36:51.372 [2024-07-10 14:39:00.759234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.372 [2024-07-10 14:39:00.759266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.372 qpair failed and we were unable to recover it. 00:36:51.372 [2024-07-10 14:39:00.759454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.372 [2024-07-10 14:39:00.759487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.372 qpair failed and we were unable to recover it. 00:36:51.372 [2024-07-10 14:39:00.759660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.372 [2024-07-10 14:39:00.759696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.372 qpair failed and we were unable to recover it. 00:36:51.372 [2024-07-10 14:39:00.759897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.372 [2024-07-10 14:39:00.759930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.372 qpair failed and we were unable to recover it. 00:36:51.372 [2024-07-10 14:39:00.760104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.372 [2024-07-10 14:39:00.760136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.372 qpair failed and we were unable to recover it. 00:36:51.372 [2024-07-10 14:39:00.760291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.372 [2024-07-10 14:39:00.760323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.372 qpair failed and we were unable to recover it. 00:36:51.372 [2024-07-10 14:39:00.760476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.372 [2024-07-10 14:39:00.760508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.372 qpair failed and we were unable to recover it. 00:36:51.372 [2024-07-10 14:39:00.760686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.372 [2024-07-10 14:39:00.760719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.372 qpair failed and we were unable to recover it. 00:36:51.372 [2024-07-10 14:39:00.760883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.372 [2024-07-10 14:39:00.760915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.372 qpair failed and we were unable to recover it. 
00:36:51.373 [2024-07-10 14:39:00.761096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.761130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.761323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.761356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.761543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.761576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.761758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.761790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.762003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.762036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.762206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.762248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.762469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.762502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.762681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.762713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.762888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.762923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.763131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.763163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 
00:36:51.373 [2024-07-10 14:39:00.763344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.763387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.763584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.763622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.763822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.763856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.764053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.764091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.764261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.764295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.764503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.764537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.764742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.764774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.764967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.764999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.765178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.765210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.765404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.765457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 
00:36:51.373 [2024-07-10 14:39:00.765610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.765644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.765837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.765895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.766114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.766150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.766298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.766340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.766505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.766538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.766701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.766742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.766912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.766946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.767114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.767147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.768024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.768062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.768246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.768279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 
00:36:51.373 [2024-07-10 14:39:00.768487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.768520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.768694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.768726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.768921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.768953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.769130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.769162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.769362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.769394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.373 qpair failed and we were unable to recover it. 00:36:51.373 [2024-07-10 14:39:00.769567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.373 [2024-07-10 14:39:00.769599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.769755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.769789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.769967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.769999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.770150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.770183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.770337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.770369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 
00:36:51.374 [2024-07-10 14:39:00.770573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.770607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.770786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.770818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.770990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.771022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.771184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.771216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.771394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.771443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.771622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.771655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.771833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.771868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.772043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.772076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.772237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.772274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.772457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.772490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 
00:36:51.374 [2024-07-10 14:39:00.772639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.772672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.772862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.772901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.773057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.773090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.773295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.773327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.773535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.773568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.773719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.773760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.773950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.773983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.774160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.774193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.774375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.774422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.774609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.774642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 
00:36:51.374 [2024-07-10 14:39:00.774827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.774859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.775061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.775093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.775287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.775320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.775541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.775573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.775732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.775765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.775991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.776024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.776204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.776236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.776437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.776469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.776626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.776659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.776823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.776855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 
00:36:51.374 [2024-07-10 14:39:00.777038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.777071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.777279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.777311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.777464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.777497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.777644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.777676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.777866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.777908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.374 [2024-07-10 14:39:00.778099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.374 [2024-07-10 14:39:00.778132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.374 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.778290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.778322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.778522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.778555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.778711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.778744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.778933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.778965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 
00:36:51.375 [2024-07-10 14:39:00.779143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.779175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.779326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.779361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.779572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.779606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.779783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.779818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.779985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.780017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.780168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.780200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.780353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.780387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.780579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.780612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.780803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.780836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.781016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.781048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 
00:36:51.375 [2024-07-10 14:39:00.781220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.781257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.781431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.781467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.781667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.781699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.781911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.781943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.782123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.782155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.782306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.782338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.782520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.782553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.782753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.782789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.782964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.782996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.783166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.783198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 
00:36:51.375 [2024-07-10 14:39:00.783436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.783469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.783624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.783656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.783839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.783875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.784067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.784099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.784280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.784330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.784538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.784571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.784747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.784779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.784935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.784967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.785170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.785206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.785403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.785448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 
00:36:51.375 [2024-07-10 14:39:00.785595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.785627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.785795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.785827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.785971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.786003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.786183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.786215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.786392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.786431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.375 qpair failed and we were unable to recover it. 00:36:51.375 [2024-07-10 14:39:00.786578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.375 [2024-07-10 14:39:00.786610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.786771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.786803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.786969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.787001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.787280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.787312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.787502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.787534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 
00:36:51.376 [2024-07-10 14:39:00.787771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.787807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.788030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.788062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.788226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.788260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.788484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.788517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.788782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.788822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.789030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.789064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.789303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.789351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.789560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.789591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.789778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.789813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.789978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.790010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 
00:36:51.376 [2024-07-10 14:39:00.790189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.790221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.790394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.790435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.790600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.790632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.790819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.790851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.791009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.791041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.791212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.791244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.791403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.791449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.791641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.791673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.791850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.791892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.792047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.792080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 
00:36:51.376 [2024-07-10 14:39:00.792234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.792267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.792448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.792481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.792656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.792689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.792875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.792907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.793085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.793118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.793299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.793331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.793493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.793526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.793704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.793736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.793953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.793985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.794137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.794169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 
00:36:51.376 [2024-07-10 14:39:00.794347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.794379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.794567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.794599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.794758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.794790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.794953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.794985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.795135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.795168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.795345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.376 [2024-07-10 14:39:00.795394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.376 qpair failed and we were unable to recover it. 00:36:51.376 [2024-07-10 14:39:00.795627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.795664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.795860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.795894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.796091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.796125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.796297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.796330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 
00:36:51.377 [2024-07-10 14:39:00.796507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.796540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.796695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.796729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.796905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.796937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.797112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.797144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.797297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.797329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.797494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.797528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.797711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.797753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.797909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.797941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.798091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.798123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.798275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.798306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 
00:36:51.377 [2024-07-10 14:39:00.798569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.798603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.798762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.798809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.798970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.799003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.799185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.799218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.799373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.799418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.799610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.799644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.799818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.799852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.800004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.800036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.800214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.800247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.800401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.800442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 
00:36:51.377 [2024-07-10 14:39:00.800625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.800659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.800838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.800872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.801060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.801094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.801273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.801310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.801470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.801504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.801664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.801697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.801918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.801950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.802130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.802167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.802385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.802418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.802608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.802640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 
00:36:51.377 [2024-07-10 14:39:00.802808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.377 [2024-07-10 14:39:00.802841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.377 qpair failed and we were unable to recover it. 00:36:51.377 [2024-07-10 14:39:00.803016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.803052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.803215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.803249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.803420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.803458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.803613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.803646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.803809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.803842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.803994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.804027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.804221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.804255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.804461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.804493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.804650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.804682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 
00:36:51.378 [2024-07-10 14:39:00.804857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.804893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.805064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.805097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.805254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.805287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.805550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.805597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.805799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.805841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.806062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.806095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.806301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.806358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.806568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.806605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.806782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.806815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.807026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.807069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 
00:36:51.378 [2024-07-10 14:39:00.807270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.807308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.807503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.807543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.807702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.807744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.807928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.807960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.808114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.808146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.808343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.808376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.808565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.808598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.808781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.808813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.809009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.809045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.809241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.809276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 
00:36:51.378 [2024-07-10 14:39:00.809472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.809505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.809698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.809746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.809950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.809986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.810160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.810195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.810398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.810453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.810632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.810666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.810845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.810878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.811082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.811137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.811319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.811355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.811553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.811586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 
00:36:51.378 [2024-07-10 14:39:00.811770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.811807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.378 [2024-07-10 14:39:00.811999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.378 [2024-07-10 14:39:00.812033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.378 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.812231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.812264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.812438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.812471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.812620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.812652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.812855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.812887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.813068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.813100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.813277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.813312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.813531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.813574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.813733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.813766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 
00:36:51.379 [2024-07-10 14:39:00.813954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.813987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.814135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.814167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.814346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.814378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.814537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.814569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.814714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.814747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.814908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.814942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.815128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.815165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.815336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.815369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.815575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.815626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.815825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.815884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 
00:36:51.379 [2024-07-10 14:39:00.816092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.816125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.816358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.816400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.816605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.816639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.816826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.816859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.817108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.817141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.817294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.817326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.817513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.817546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.817725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.817758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.817946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.817978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.818161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.818193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 
00:36:51.379 [2024-07-10 14:39:00.818347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.818379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.818579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.818612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.818851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.818884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.819070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.819102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.819242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.819290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.819510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.819544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.819717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.819768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.819970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.820008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.820179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.820216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.820422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.820464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 
00:36:51.379 [2024-07-10 14:39:00.820628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.379 [2024-07-10 14:39:00.820661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.379 qpair failed and we were unable to recover it. 00:36:51.379 [2024-07-10 14:39:00.820831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.820865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.821018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.821068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.821259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.821293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.821469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.821503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.821661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.821696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.821878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.821913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.822104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.822137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.822341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.822378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.822557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.822591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 
00:36:51.380 [2024-07-10 14:39:00.822747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.822795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.822963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.822997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.823150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.823186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.823377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.823410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.823599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.823631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.823777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.823810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.823968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.824001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.824174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.824207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.824387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.824421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.824582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.824615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 
00:36:51.380 [2024-07-10 14:39:00.824796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.824832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.824992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.825034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.825253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.825286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.825461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.825494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.825653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.825686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.825877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.825910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.380 [2024-07-10 14:39:00.826088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.380 [2024-07-10 14:39:00.826121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.380 qpair failed and we were unable to recover it. 00:36:51.655 [2024-07-10 14:39:00.826299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-07-10 14:39:00.826331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-07-10 14:39:00.826487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-07-10 14:39:00.826523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-07-10 14:39:00.826686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-07-10 14:39:00.826719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 
00:36:51.655 [2024-07-10 14:39:00.826891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-07-10 14:39:00.826923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-07-10 14:39:00.827125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-07-10 14:39:00.827157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-07-10 14:39:00.827375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-07-10 14:39:00.827416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-07-10 14:39:00.827628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-07-10 14:39:00.827661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-07-10 14:39:00.827835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-07-10 14:39:00.827868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-07-10 14:39:00.828050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-07-10 14:39:00.828083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-07-10 14:39:00.828230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-07-10 14:39:00.828263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-07-10 14:39:00.828469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-07-10 14:39:00.828502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-07-10 14:39:00.828654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-07-10 14:39:00.828689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-07-10 14:39:00.828844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-07-10 14:39:00.828878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 
00:36:51.655 [2024-07-10 14:39:00.829040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-07-10 14:39:00.829073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.829249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.829282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.829438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.829471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.829620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.829654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.829908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.829945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.830154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.830202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.830365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.830398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.830595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.830628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.830849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.830884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.831064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.831098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 
00:36:51.656 [2024-07-10 14:39:00.831260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.831300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.831519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.831552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.831732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.831765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.831960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.832011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.832222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.832263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.832466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.832500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.832697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.832749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.832949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.832982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.833140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.833172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.833318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.833352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 
00:36:51.656 [2024-07-10 14:39:00.833523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.833559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.833817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.833860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.834044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.834078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.834235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.834270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.834421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.834463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.834613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.834646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.834808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.834845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.834998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.835035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.835236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.835273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.835443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.835479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 
00:36:51.656 [2024-07-10 14:39:00.835679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.835711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.835883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.835917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.836073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.836108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.836283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.836316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.836528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.836565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.836738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.836774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.836976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.837009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.837191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.837229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.837448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.837483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-07-10 14:39:00.837640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.837672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 
00:36:51.656 [2024-07-10 14:39:00.837887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-07-10 14:39:00.837943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.838119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.838156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.838341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.838373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.838558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.838592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.838740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.838772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.838930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.838962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.839172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.839204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.839435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.839487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.839640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.839672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.839912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.839949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 
00:36:51.657 [2024-07-10 14:39:00.840116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.840150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.840368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.840401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.840565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.840597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.840795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.840830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.841035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.841068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.841235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.841273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.841486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.841520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.841678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.841711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.841905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.841941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.842091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.842144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 
00:36:51.657 [2024-07-10 14:39:00.842364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.842397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.842581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.842618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.842827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.842860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.843003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.843035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.843242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.843292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.843498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.843531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.843730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.843763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.843964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.843998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.844235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.844271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.844448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.844482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 
00:36:51.657 [2024-07-10 14:39:00.844655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.844688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.844907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.844942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.845152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.845185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.845361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.845393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.845557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.845589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.845781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.845814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.845993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.846027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.846216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.846248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.846459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.846492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-07-10 14:39:00.846666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.846717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 
00:36:51.657 [2024-07-10 14:39:00.846939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-07-10 14:39:00.846975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.847195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.847227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.847468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.847511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.847657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.847689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.847871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.847904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.848112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.848166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.848393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.848448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.848642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.848674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.848880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.848931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.849157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.849192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 
00:36:51.658 [2024-07-10 14:39:00.849384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.849416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.849603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.849635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.849848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.849883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.850112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.850144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.850310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.850346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.850546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.850579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.850734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.850766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.850965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.851001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.851166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.851204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.851404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.851444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 
00:36:51.658 [2024-07-10 14:39:00.851637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.851670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.851884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.851927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.852096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.852128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.852300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.852332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.852500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.852533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.852687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.852719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.852913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.852948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.853112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.853149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.853351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.853384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.853576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.853610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 
00:36:51.658 [2024-07-10 14:39:00.853837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.853873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.854073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.854105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.854306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.854342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.854555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.854589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.854772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.854804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.855081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.855153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.855346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.855386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.855576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.855610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.855817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.855854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.856032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.856068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 
00:36:51.658 [2024-07-10 14:39:00.856279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.658 [2024-07-10 14:39:00.856312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.658 qpair failed and we were unable to recover it. 00:36:51.658 [2024-07-10 14:39:00.856549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.856583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.856765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.856798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.857017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.857050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.857291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.857327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.857535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.857568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.857747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.857780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.858074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.858134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.858359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.858395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.858584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.858617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 
00:36:51.659 [2024-07-10 14:39:00.858821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.858870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.859096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.859132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.859333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.859365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.859575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.859608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.859766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.859798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.860004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.860041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.860206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.860242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.860464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.860497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.860645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.860677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.860820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.860852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 
00:36:51.659 [2024-07-10 14:39:00.861052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.861103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.861309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.861340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.861607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.861640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.861809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.861841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.862042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.862074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.862289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.862337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.862512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.862546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.862726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.862767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.863038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.863074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.863258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.863295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 
00:36:51.659 [2024-07-10 14:39:00.863503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.863537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.863706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.863740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.863919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.863962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.864104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.864136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.864317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.864351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.659 [2024-07-10 14:39:00.864551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.659 [2024-07-10 14:39:00.864585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.659 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.864767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.864799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.864976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.865008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.865235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.865271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.865446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.865479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 
00:36:51.660 [2024-07-10 14:39:00.865654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.865686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.865872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.865904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.866057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.866090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.866310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.866346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.866556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.866589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.866792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.866824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.867031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.867064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.867274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.867310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.867509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.867546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.867736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.867773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 
00:36:51.660 [2024-07-10 14:39:00.867972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.868008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.868203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.868235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.868387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.868443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.868636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.868668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.868859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.868891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.869122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.869157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.869354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.869386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.869572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.869604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.869803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.869839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.870061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.870097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 
00:36:51.660 [2024-07-10 14:39:00.870326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.870358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.870559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.870592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.870757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.870789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.871007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.871039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.871191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.871223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.871422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.871462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.871619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.871652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.871848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.871880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.872084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.872120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.872323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.872355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 
00:36:51.660 [2024-07-10 14:39:00.872578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.872611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.872843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.872879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.873077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.873111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.873311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.873347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.873532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.660 [2024-07-10 14:39:00.873565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.660 qpair failed and we were unable to recover it. 00:36:51.660 [2024-07-10 14:39:00.873770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.873803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.873979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.874011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.874164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.874215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.874389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.874422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.874620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.874652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 
00:36:51.661 [2024-07-10 14:39:00.874866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.874898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.875052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.875085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.875269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.875302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.875447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.875497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.875699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.875731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.875898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.875930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.876113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.876146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.876287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.876318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.876467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.876521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.876742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.876777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 
00:36:51.661 [2024-07-10 14:39:00.876961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.876993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.877159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.877191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.877373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.877405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.877626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.877658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.877841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.877878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.878097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.878133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.878301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.878343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.878548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.878585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.878764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.878796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.878998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.879030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 
00:36:51.661 [2024-07-10 14:39:00.879233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.879269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.879486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.879522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.879765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.879797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.879988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.880020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.880170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.880203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.880380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.880414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.880631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.880668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.880843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.880875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.881042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.881075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.881232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.881267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 
00:36:51.661 [2024-07-10 14:39:00.881433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.881470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.881699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.881731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.881933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.881965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.882133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.882169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.882379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.882415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.661 [2024-07-10 14:39:00.882649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.661 [2024-07-10 14:39:00.882682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.661 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.882870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.882902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.883141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.883174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.883417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.883459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.883648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.883695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 
00:36:51.662 [2024-07-10 14:39:00.883901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.883933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.884082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.884115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.884295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.884331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.884525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.884557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.884749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.884785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.884954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.884990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.885165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.885197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.885443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.885479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.885644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.885688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.885897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.885929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 
00:36:51.662 [2024-07-10 14:39:00.886095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.886131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.886321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.886353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.886512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.886544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.886717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.886749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.886922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.886954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.887134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.887165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.887344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.887377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.887542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.887575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.887730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.887762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.887915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.887947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 
00:36:51.662 [2024-07-10 14:39:00.888117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.888149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.888321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.888353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.888527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.888561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.888736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.888773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.888943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.888975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.889131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.889162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.889343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.889375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.889532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.889565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.889756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.889792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.889992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.890027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 
00:36:51.662 [2024-07-10 14:39:00.890228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.890260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.890437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.890470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.890619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.890651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.890833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.890865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.891023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.891057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.891266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-07-10 14:39:00.891301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-07-10 14:39:00.891493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.891525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.891671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.891720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.891909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.891945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.892119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.892152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 
00:36:51.663 [2024-07-10 14:39:00.892292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.892355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.892573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.892610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.892813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.892851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.893047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.893079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.893252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.893288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.893469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.893501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.893708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.893753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.893952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.893988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.894165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.894202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.894385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.894421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 
00:36:51.663 [2024-07-10 14:39:00.894660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.894695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.894902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.894934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.895112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.895157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.895317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.895352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.895564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.895596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.895794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.895830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.896071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.896103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.896252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.896284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.896522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.896560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.896731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.896767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 
00:36:51.663 [2024-07-10 14:39:00.896973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.897007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.897197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.897243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.897440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.897478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.897671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.897712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.897866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.897898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.898066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.898098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.898255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.898287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.898498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.898534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.898736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.898772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.898945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.898978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 
00:36:51.663 [2024-07-10 14:39:00.899147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.899183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.899382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.899415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.899573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.899605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.899749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.899781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.899962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.899994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.900152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.900184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.900346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-07-10 14:39:00.900382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-07-10 14:39:00.900619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-07-10 14:39:00.900652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-07-10 14:39:00.900829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-07-10 14:39:00.900861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-07-10 14:39:00.901038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-07-10 14:39:00.901076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 
00:36:51.664 [2024-07-10 14:39:00.901246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.664 [2024-07-10 14:39:00.901281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:51.664 qpair failed and we were unable to recover it.
[... the same pair of errors repeats back-to-back for every reconnect attempt from 14:39:00.901 through 14:39:00.949: posix_sock_create reports connect() failed, errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:51.670 [2024-07-10 14:39:00.949925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.670 [2024-07-10 14:39:00.949960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:51.670 qpair failed and we were unable to recover it.
00:36:51.670 [2024-07-10 14:39:00.950149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.950184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.950360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.950392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.950578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.950612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.950789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.950827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.951015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.951047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.951225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.951257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.951513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.951556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.951734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.951767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.951959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.951994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.952187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.952223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 
00:36:51.670 [2024-07-10 14:39:00.952436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.952469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.952693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.952728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.952954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.952986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.953189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.953220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.953423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.953467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.953691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.953726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.953915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.953948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.954144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.954179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.954382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.954418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.954574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.954606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 
00:36:51.670 [2024-07-10 14:39:00.954808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.954843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.955013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.955045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.955197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.955230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.955401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.955446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.955656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.955690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.955878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.955910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.956105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.956140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-07-10 14:39:00.956332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-07-10 14:39:00.956367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.956590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.956623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.956855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.956890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 
00:36:51.671 [2024-07-10 14:39:00.957047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.957082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.957278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.957310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.957510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.957547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.957715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.957751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.957952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.957984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.958160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.958192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.958338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.958370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.958580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.958613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.958825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.958861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.959032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.959082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 
00:36:51.671 [2024-07-10 14:39:00.959277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.959309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.959463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.959495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.959670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.959702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.959937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.959969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.960198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.960233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.960398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.960445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.960642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.960675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.960883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.960918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.961112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.961147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.961336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.961368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 
00:36:51.671 [2024-07-10 14:39:00.961537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.961568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.961789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.961824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.962052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.962084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.962251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.962286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.962478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.962514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.962715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.962747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.962920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.962952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.963162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.963197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.963392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.963430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.963612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.963647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 
00:36:51.671 [2024-07-10 14:39:00.963855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.963890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.964091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.964125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.964281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.964314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.964492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.964525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.964744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.964776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.964933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.964966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.965171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.965207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.965445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.671 [2024-07-10 14:39:00.965478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.671 qpair failed and we were unable to recover it. 00:36:51.671 [2024-07-10 14:39:00.965683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.965718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.965907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.965942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 
00:36:51.672 [2024-07-10 14:39:00.966123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.966166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.966345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.966377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.966609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.966645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.966848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.966880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.967085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.967127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.967326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.967363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.967569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.967602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.967795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.967843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.968041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.968082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.968258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.968291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 
00:36:51.672 [2024-07-10 14:39:00.968487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.968523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.968743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.968778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.968975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.969007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.969206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.969242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.969454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.969490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.969678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.969718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.969908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.969943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.970129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.970164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.970369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.970401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.970617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.970653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 
00:36:51.672 [2024-07-10 14:39:00.970849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.970885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.971087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.971119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.971352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.971388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.971602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.971635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.971839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.971872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.972073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.972109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.972301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.972336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.972543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.972576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.972769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.972805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.972996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.973032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 
00:36:51.672 [2024-07-10 14:39:00.973210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.973243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.973438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.973486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.973653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.973688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.973914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.973946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.974177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.974212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.974418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.974462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.974695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.974727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.974926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.974962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.672 [2024-07-10 14:39:00.975167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.672 [2024-07-10 14:39:00.975201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.672 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.975388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.975421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 
00:36:51.673 [2024-07-10 14:39:00.975592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.975624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.975830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.975866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.976038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.976070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.976242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.976274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.976455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.976508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.976725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.976757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.976943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.976975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.977124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.977157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.977340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.977372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.977540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.977572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 
00:36:51.673 [2024-07-10 14:39:00.977746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.977778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.977982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.978014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.978228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.978260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.978450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.978486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.978688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.978720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.978870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.978902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.979122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.979158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.979353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.979385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.979578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.979610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.979837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.979873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 
00:36:51.673 [2024-07-10 14:39:00.980083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.980115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.980315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.980351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.980561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.980598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.980797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.980830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.980985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.981047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.981278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.981314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.981528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.981561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.981786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.981822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.982020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.982056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 00:36:51.673 [2024-07-10 14:39:00.982281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.673 [2024-07-10 14:39:00.982314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.673 qpair failed and we were unable to recover it. 
00:36:51.678 [2024-07-10 14:39:01.028072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.678 [2024-07-10 14:39:01.028104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.678 qpair failed and we were unable to recover it. 00:36:51.678 [2024-07-10 14:39:01.028283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.678 [2024-07-10 14:39:01.028315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.028502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.028535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.028716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.028748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.028935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.028968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.029175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.029211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.029422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.029490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.029664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.029696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.029892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.029924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.030130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.030163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 
00:36:51.679 [2024-07-10 14:39:01.030340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.030372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.030554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.030587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.030759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.030792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.030964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.030996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.031206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.031238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.031435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.031468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.031618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.031650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.031836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.031869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.032088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.032121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.032292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.032325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 
00:36:51.679 [2024-07-10 14:39:01.032493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.032526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.032702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.032734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.032920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.032953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.033136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.033172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.033375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.033418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.033638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.033674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.033850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.033887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.034062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.034094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.034258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.034294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.034482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.034515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 
00:36:51.679 [2024-07-10 14:39:01.034671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.034703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.034858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.034897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.035141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.035187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.035422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.035462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.035641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.035676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.035854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.035890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.036084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.036117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.036296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.036332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.036510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.036546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.036723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.036755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 
00:36:51.679 [2024-07-10 14:39:01.036969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.037015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.037174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.679 [2024-07-10 14:39:01.037212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.679 qpair failed and we were unable to recover it. 00:36:51.679 [2024-07-10 14:39:01.037420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.037470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.037703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.037749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.037950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.037982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.038177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.038214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.038411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.038453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.038656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.038693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.038865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.038897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.039089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.039121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 
00:36:51.680 [2024-07-10 14:39:01.039307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.039349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.039530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.039563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.039735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.039767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.039949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.039986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.040224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.040256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.040461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.040494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.040725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.040762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.041006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.041038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.041247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.041297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.041515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.041548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 
00:36:51.680 [2024-07-10 14:39:01.041739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.041772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.041977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.042013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.042206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.042242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.042455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.042488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.042654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.042690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.042870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.042907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.043107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.043139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.043284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.043316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.043503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.043540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.043743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.043775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 
00:36:51.680 [2024-07-10 14:39:01.043950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.043982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.044177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.044213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.044395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.044442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.044599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.044631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.044814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.044846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.045079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.045111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.045338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.045373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.045590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.045626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.045819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.045851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.046025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.046061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 
00:36:51.680 [2024-07-10 14:39:01.046257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.046292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.046491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.046525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.046715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.680 [2024-07-10 14:39:01.046752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.680 qpair failed and we were unable to recover it. 00:36:51.680 [2024-07-10 14:39:01.046946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.046977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.047152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.047184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.047381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.047419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.047631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.047664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.047852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.047884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.048116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.048149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.048336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.048370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 
00:36:51.681 [2024-07-10 14:39:01.048552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.048584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.048778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.048818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.049019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.049051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.049228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.049261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.049478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.049512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.049675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.049730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.049916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.049948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.050145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.050180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.050334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.050369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.050581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.050614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 
00:36:51.681 [2024-07-10 14:39:01.050841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.050875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.051063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.051096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.051262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.051294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.051488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.051521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.051706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.051740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.051960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.051991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.052154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.052187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.052337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.052371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.052582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.052615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.052769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.052801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 
00:36:51.681 [2024-07-10 14:39:01.052945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.052994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.053222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.053254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.053420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.053460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.053631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.053663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.053815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.053857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.054040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.054072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.054218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.054251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.054431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.054464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.054617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.054649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.054856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.054888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 
00:36:51.681 [2024-07-10 14:39:01.055037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.055069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.055242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.055274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.055451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.055484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.681 [2024-07-10 14:39:01.055646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.681 [2024-07-10 14:39:01.055678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.681 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.055871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.055902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.056079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.056114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.056292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.056325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.056534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.056567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.056739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.056771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.056938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.056970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 
00:36:51.682 [2024-07-10 14:39:01.057142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.057174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.057376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.057408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.057588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.057622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.057784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.057816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.057967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.058001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.058203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.058235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.058418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.058456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.058629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.058661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.058840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.058872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.059024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.059056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 
00:36:51.682 [2024-07-10 14:39:01.059203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.059236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.059393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.059433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.059586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.059618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.059822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.059853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.060004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.060036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.060207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.060240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.060423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.060472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.060651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.060683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.060846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.060878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.061069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.061101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 
00:36:51.682 [2024-07-10 14:39:01.061288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.061320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.061492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.061525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.061703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.061736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.061942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.061974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.062155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.062188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.062361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.062392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.062549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.062581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.062755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.062787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.062968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.063005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.682 qpair failed and we were unable to recover it. 00:36:51.682 [2024-07-10 14:39:01.063176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.682 [2024-07-10 14:39:01.063208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 
00:36:51.683 [2024-07-10 14:39:01.063381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.063421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.063597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.063629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.063768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.063800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.063956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.063989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.064193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.064225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.064366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.064402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.064568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.064601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.064805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.064837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.064985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.065018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.065200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.065232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 
00:36:51.683 [2024-07-10 14:39:01.065406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.065445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.065600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.065632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.065811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.065843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.066017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.066049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.066202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.066235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.066403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.066443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.066595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.066628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.066778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.066811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.066986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.067028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.067201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.067233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 
00:36:51.683 [2024-07-10 14:39:01.067408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.067447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.067620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.067651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.067879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.067911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.068065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.068097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.068276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.068308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.068516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.068549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.068703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.068735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.068918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.068950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.069122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.069154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.069354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.069386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 
00:36:51.683 [2024-07-10 14:39:01.069558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.069590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.069743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.069775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.069986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.070038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.070212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.070249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.070457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.070491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.070703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.070742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.070943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.070977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.071160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.071193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.071335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.071369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 00:36:51.683 [2024-07-10 14:39:01.071522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.683 [2024-07-10 14:39:01.071555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.683 qpair failed and we were unable to recover it. 
00:36:51.683 [2024-07-10 14:39:01.071732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.071783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.072042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.072077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.072279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.072321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.072536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.072569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.072726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.072759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.072985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.073025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.073187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.073223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.073398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.073440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.073618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.073650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.073825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.073861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 
00:36:51.684 [2024-07-10 14:39:01.074085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.074121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.074319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.074354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.074593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.074625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.074786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.074818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.075028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.075062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.075263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.075299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.075501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.075534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.075738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.075770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.076034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.076093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.076297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.076333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 
00:36:51.684 [2024-07-10 14:39:01.076578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.076611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.076769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.076801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.076956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.076988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.077171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.077224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.077415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.077458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.077695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.077727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.077954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.078003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.078221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.078259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.078493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.078526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.078724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.078760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 
00:36:51.684 [2024-07-10 14:39:01.078963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.078999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.079168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.079204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.079402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.079461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.079612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.079644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.079902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.079961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.080158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.080194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.080371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.080403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.080598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.080647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.080838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.080875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.684 [2024-07-10 14:39:01.081085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.081139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 
00:36:51.684 [2024-07-10 14:39:01.081329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.684 [2024-07-10 14:39:01.081362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.684 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.081555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.081589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.081784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.081837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.082041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.082093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.082278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.082313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.082496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.082536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.082722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.082765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.082985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.083037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.083256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.083289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.083477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.083510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 
00:36:51.685 [2024-07-10 14:39:01.083664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.083697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.083889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.083924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.084105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.084138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.084290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.084323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.084501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.084536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.084684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.084717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.084906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.084939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.085122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.085155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.085336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.085370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.085570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.085604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 
00:36:51.685 [2024-07-10 14:39:01.085782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.085834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.086067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.086101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.086281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.086315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.086521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.086573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.086812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.086864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.087016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.087050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.087228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.087261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.087440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.087480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.087657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.087709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.087921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.087972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 
00:36:51.685 [2024-07-10 14:39:01.088152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.088186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.088362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.088396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.088600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.088654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.088857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.088908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.089118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.089168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.089355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.089387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.089675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.089726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.089899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.089950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.090184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.090235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.090418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.090471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 
00:36:51.685 [2024-07-10 14:39:01.090745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.090780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.685 qpair failed and we were unable to recover it. 00:36:51.685 [2024-07-10 14:39:01.090993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.685 [2024-07-10 14:39:01.091050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.091202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.091235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.091441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.091478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.091677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.091729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.091935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.091986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.092217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.092275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.092484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.092535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.092768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.092819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.092993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.093026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 
00:36:51.686 [2024-07-10 14:39:01.093180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.093213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.093379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.093412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.093679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.093733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.094006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.094058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.094256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.094288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.094482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.094534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.094733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.094770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.095001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.095053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.095266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.095299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.095495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.095546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 
00:36:51.686 [2024-07-10 14:39:01.095756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.095790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.095971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.096005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.096145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.096179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.096347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.096381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.096613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.096646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.096825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.096876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.097073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.097123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.097273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.097306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.097514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.097565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.097797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.097831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 
00:36:51.686 [2024-07-10 14:39:01.098036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.098069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.098248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.098281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.098476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.098518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.098710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.098760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.098940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.098990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.099191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.099223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.099391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.686 [2024-07-10 14:39:01.099434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.686 qpair failed and we were unable to recover it. 00:36:51.686 [2024-07-10 14:39:01.099634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.099684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.099852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.099903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.100146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.100197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 
00:36:51.687 [2024-07-10 14:39:01.100423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.100482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.100686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.100738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.101011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.101067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.101358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.101391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.101615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.101667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.101892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.101943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.102238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.102305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.102528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.102580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.102856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.102907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.103130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.103181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 
00:36:51.687 [2024-07-10 14:39:01.103338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.103371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.103572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.103624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.103860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.103916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.104123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.104175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.104354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.104387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.104559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.104610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.104845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.104896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.105100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.105152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.105307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.105340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.105530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.105581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 
00:36:51.687 [2024-07-10 14:39:01.105792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.105842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.106051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.106101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.106249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.106282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.106441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.106474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.106756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.106809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.107036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.107096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.107302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.107335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.107560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.107611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.107778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.107829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.107993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.108043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 
00:36:51.687 [2024-07-10 14:39:01.108254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.108287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.108463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.108496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.108727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.108780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.109051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.109102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.109288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.109321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.109556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.109608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.110100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.110138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.687 [2024-07-10 14:39:01.110317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.687 [2024-07-10 14:39:01.110351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.687 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.110571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.110624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.110816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.110866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 
00:36:51.688 [2024-07-10 14:39:01.111039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.111091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.111269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.111303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.111486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.111540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.111716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.111767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.111935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.111985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.112164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.112197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.112402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.112442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.112638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.112694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.112913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.112947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.113151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.113185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 
00:36:51.688 [2024-07-10 14:39:01.113368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.113402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.113580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.113633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.113842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.113893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.114110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.114145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.114304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.114338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.114535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.114589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.114817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.114854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.115082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.115134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.115314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.115347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.115542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.115594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 
00:36:51.688 [2024-07-10 14:39:01.115779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.115830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.116040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.116091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.116282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.116315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.116499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.116550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.116735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.116786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.116985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.117036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.117248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.117282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.117434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.117475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.117663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.117714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 00:36:51.688 [2024-07-10 14:39:01.117892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.688 [2024-07-10 14:39:01.117946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.688 qpair failed and we were unable to recover it. 
00:36:51.688 [2024-07-10 14:39:01.118103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.118138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.118318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.118351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.118543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.118603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.118872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.118923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.119151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.119205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.119359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.119393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.119581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.119633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.119804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.119856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.120024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.120074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.120246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.120279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 
00:36:51.965 [2024-07-10 14:39:01.120476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.120514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.120732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.120783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.120961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.121012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.121177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.121210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.121392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.121431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.121609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.121661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.121843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.121894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.122131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.122186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.122358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.122396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.122597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.122634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 
00:36:51.965 [2024-07-10 14:39:01.122805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.122843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.123042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.123079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.123272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.123309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.123482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.123516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.123698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-07-10 14:39:01.123751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-07-10 14:39:01.124025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.124061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.124253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.124289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.124526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.124559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.124744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.124780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.124972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.125009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 
00:36:51.966 [2024-07-10 14:39:01.125202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.125240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.125467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.125514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.125680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.125732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.125986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.126022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.126221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.126257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.126483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.126518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.126672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.126722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.126946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.127002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.127211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.127247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.127437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.127478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 
00:36:51.966 [2024-07-10 14:39:01.127637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.127671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.127893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.127929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.128120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.128174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.128337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.128373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.128553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.128587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.128935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.128991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.129184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.129219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.129440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.129491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.129646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.129679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.129904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.129939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 
00:36:51.966 [2024-07-10 14:39:01.130168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.130204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.130390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.130435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.130614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.130646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.130849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.130886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.131104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.131140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.131330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.131365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.131561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.131594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.131790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.131838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.132070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.132124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.132332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.132395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 
00:36:51.966 [2024-07-10 14:39:01.132563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.132597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.132811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.132862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.133089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.133141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.133342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.133376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.133568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.133602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.133797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.133835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.134053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.134089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.134307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.134343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.134573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.134607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.134796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.134832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 
00:36:51.966 [2024-07-10 14:39:01.135035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.135072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.135247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.135284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-07-10 14:39:01.135453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-07-10 14:39:01.135504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.135772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.135807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.136011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.136063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.136237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.136288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.136446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.136483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.136659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.136713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.136921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.136973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.137172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.137223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 
00:36:51.967 [2024-07-10 14:39:01.137402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.137441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.137624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.137662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.137829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.137872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.138045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.138081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.138254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.138290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.138498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.138531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.138730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.138766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.138926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.138962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.139130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.139166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-07-10 14:39:01.139356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-07-10 14:39:01.139393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 
00:36:51.970 [2024-07-10 14:39:01.171484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.970 [2024-07-10 14:39:01.171540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:51.970 qpair failed and we were unable to recover it.
00:36:51.970 [2024-07-10 14:39:01.171736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.970 [2024-07-10 14:39:01.171772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:51.970 qpair failed and we were unable to recover it.
00:36:51.970 [2024-07-10 14:39:01.171955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.970 [2024-07-10 14:39:01.171990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:51.970 qpair failed and we were unable to recover it.
00:36:51.970 [2024-07-10 14:39:01.172142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.970 [2024-07-10 14:39:01.172177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:51.970 qpair failed and we were unable to recover it.
00:36:51.970 [2024-07-10 14:39:01.172361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.970 [2024-07-10 14:39:01.172395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:51.970 qpair failed and we were unable to recover it.
00:36:51.970 [2024-07-10 14:39:01.172599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.970 [2024-07-10 14:39:01.172646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:51.970 qpair failed and we were unable to recover it.
00:36:51.970 [2024-07-10 14:39:01.172856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.970 [2024-07-10 14:39:01.172895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:51.970 qpair failed and we were unable to recover it.
00:36:51.970 [2024-07-10 14:39:01.173167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.970 [2024-07-10 14:39:01.173200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:51.970 qpair failed and we were unable to recover it.
00:36:51.970 [2024-07-10 14:39:01.173432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.970 [2024-07-10 14:39:01.173480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:51.970 qpair failed and we were unable to recover it.
00:36:51.970 [2024-07-10 14:39:01.173686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.970 [2024-07-10 14:39:01.173740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:51.970 qpair failed and we were unable to recover it.
00:36:51.971 [2024-07-10 14:39:01.185078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.185114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.185283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.185318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.185540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.185574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.185721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.185765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.185935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.185971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.186181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.186213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.186374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.186431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.186627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.186660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.186812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.186846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.187044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.187081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 
00:36:51.971 [2024-07-10 14:39:01.187277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.187313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.187536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.187569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.187769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.187827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.188022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.188060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.188239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.188276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.188504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.188538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.188738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.188799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.188994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.189031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.189300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.189354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.189572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.189606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 
00:36:51.971 [2024-07-10 14:39:01.189777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.189815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.190011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.190046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.190239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.190295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.190492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.190525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.971 qpair failed and we were unable to recover it. 00:36:51.971 [2024-07-10 14:39:01.190681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.971 [2024-07-10 14:39:01.190731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.190920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.190956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.191128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.191163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.191334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.191367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.191540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.191573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.191751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.191785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 
00:36:51.972 [2024-07-10 14:39:01.191936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.191987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.192192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.192225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.192431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.192488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.192645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.192677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.192898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.192930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.193099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.193131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.193303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.193339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.193541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.193574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.193723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.193756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.193931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.193963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 
00:36:51.972 [2024-07-10 14:39:01.194165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.194200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.194364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.194401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.194616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.194649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.194827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.194860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.195009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.195041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.195217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.195249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.195475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.195525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.195698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.195730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.195903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.195939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.196151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.196184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 
00:36:51.972 [2024-07-10 14:39:01.196382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.196418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.196608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.196641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.196821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.196853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.197067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.197103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.197288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.197324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.197487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.197521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.197672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.197733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.197962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.197995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.198162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.198195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.198385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.198417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 
00:36:51.972 [2024-07-10 14:39:01.198647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.198715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.198925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.198965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.199193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.199227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.199514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.972 [2024-07-10 14:39:01.199548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.972 qpair failed and we were unable to recover it. 00:36:51.972 [2024-07-10 14:39:01.199742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.199778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.199972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.200013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.200259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.200292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.200479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.200516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.200699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.200733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.200916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.200948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 
00:36:51.973 [2024-07-10 14:39:01.201100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.201133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.201288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.201322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.201482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.201520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.201719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.201756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.201925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.201962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.202139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.202172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.202364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.202400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.202612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.202645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.202798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.202830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.203012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.203045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 
00:36:51.973 [2024-07-10 14:39:01.203203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.203235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.203387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.203420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.203653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.203687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.203876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.203909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.204138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.204174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.204336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.204372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.204621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.204656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.204813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.204845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.205110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.205146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.205349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.205382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 
00:36:51.973 [2024-07-10 14:39:01.205575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.205609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.205816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.205848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.206094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.206129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.206348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.206384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.206573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.206605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.206752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.206785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.206984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.207020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.207251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.207283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.207437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.207470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.207615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.207648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 
00:36:51.973 [2024-07-10 14:39:01.207825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.207857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.208063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.208098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.208269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.208305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.208540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.208574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.208770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.208807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.208973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.209009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.209177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.209213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.209386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.209419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.209630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.209681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.209932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.209971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 
00:36:51.973 [2024-07-10 14:39:01.210172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.210228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.210430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.210471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.210631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.210665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.210854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.210890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.211095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.211155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.973 [2024-07-10 14:39:01.211360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.973 [2024-07-10 14:39:01.211396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:51.973 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.211560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.211594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.211762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.211798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.212002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.212058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.212247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.212280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 
00:36:51.974 [2024-07-10 14:39:01.212440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.212472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.212618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.212652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.212858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.212894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.213074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.213106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.213304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.213339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.213526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.213558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.213753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.213785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.213937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.213969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.214139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.214171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.214352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.214388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 
00:36:51.974 [2024-07-10 14:39:01.214566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.214599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.214743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.214776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.214930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.214963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.215112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.215144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.215347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.215383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.215574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.215606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.215788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.215866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.216040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.216076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.216258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.216294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-07-10 14:39:01.216468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-07-10 14:39:01.216505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 
00:36:51.978 [2024-07-10 14:39:01.262524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.262560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-07-10 14:39:01.262730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.262766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-07-10 14:39:01.262956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.262991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-07-10 14:39:01.263217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.263249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-07-10 14:39:01.263399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.263437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-07-10 14:39:01.263639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.263689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-07-10 14:39:01.263959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.264015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-07-10 14:39:01.264245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.264277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-07-10 14:39:01.264505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.264541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-07-10 14:39:01.264760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.264795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 
00:36:51.978 [2024-07-10 14:39:01.265048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.265105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-07-10 14:39:01.265326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.265358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-07-10 14:39:01.265517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.265550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-07-10 14:39:01.265747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.265783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-07-10 14:39:01.265973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.266008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-07-10 14:39:01.266210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-07-10 14:39:01.266242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.266411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.266451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.266656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.266691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.266921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.266954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.267130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.267162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 
00:36:51.979 [2024-07-10 14:39:01.267338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.267369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.267598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.267635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.267885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.267941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.268143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.268175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.268364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.268401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.268638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.268670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.268929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.268998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.269234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.269267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.269470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.269508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.269688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.269720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 
00:36:51.979 [2024-07-10 14:39:01.269916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.269953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.270185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.270218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.270421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.270468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.270677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.270709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.270877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.270928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.271131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.271163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.271367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.271403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.271613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.271649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.271866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.271902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.272138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.272174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 
00:36:51.979 [2024-07-10 14:39:01.272394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.272435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.272654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.272690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.272902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.272960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.273193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.273224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.273457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.273494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.273692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.273728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.273913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.273949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.274125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.274157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.274325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.274361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.274583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.274619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 
00:36:51.979 [2024-07-10 14:39:01.274788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.274824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.275026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.275058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.275220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.275267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.275442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.275479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.275661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.275696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.275870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.275904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.276062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.276099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.276327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.276359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-07-10 14:39:01.276550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-07-10 14:39:01.276587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.276785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.276817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 
00:36:51.980 [2024-07-10 14:39:01.277019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.277055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.277221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.277258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.277438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.277476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.277666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.277699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.277896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.277932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.278149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.278185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.278365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.278401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.278624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.278657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.278880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.278912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.279110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.279142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 
00:36:51.980 [2024-07-10 14:39:01.279382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.279418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.279589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.279621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.279839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.279875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.280063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.280099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.280292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.280327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.280555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.280588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.280839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.280871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.281071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.281106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.281335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.281367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.281552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.281590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 
00:36:51.980 [2024-07-10 14:39:01.281764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.281798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.282022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.282058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.282225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.282261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.282471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.282504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.282701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.282736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.282932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.282967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.283157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.283193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.283359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.283391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.283549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.283582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.283805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.283841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 
00:36:51.980 [2024-07-10 14:39:01.284162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.284228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.284399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.284438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.284663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.284699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.284907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.284939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.285119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.285151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.285335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.285367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.285521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.285554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.285698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.285748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.285981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.286040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.286239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.286271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 
00:36:51.980 [2024-07-10 14:39:01.286453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.286494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.286722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.286758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.286915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.286952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.287154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.287187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.287417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.287469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.287675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-07-10 14:39:01.287708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-07-10 14:39:01.287927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.287984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.288180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.288212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.288435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.288480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.288701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.288733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 
00:36:51.981 [2024-07-10 14:39:01.288910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.288942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.289088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.289120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.289313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.289348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.289565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.289598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.289797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.289851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.290054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.290086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.290261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.290308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.290504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.290541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.290763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.290795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.290980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.291016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 
00:36:51.981 [2024-07-10 14:39:01.291241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.291276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.291449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.291489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.291674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.291734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.291929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.291961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.292159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.292195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.292418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.292461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.292678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.292714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.292893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.292926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.293088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.293120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.293317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.293353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 
00:36:51.981 [2024-07-10 14:39:01.293517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.293554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.293778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.293810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.294040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.294076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.294285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.294321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.294589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.294647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.294847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.294880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.295053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.295088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.295316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.295351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.295640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.295697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.295916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.295948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 
00:36:51.981 [2024-07-10 14:39:01.296152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.296188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.296383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.296418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.296642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.296678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.296901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.296934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.297133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.297168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.297383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.297418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.297629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.297666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.297870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.297904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.298080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.298116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.298290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.298325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 
00:36:51.981 [2024-07-10 14:39:01.298514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.298550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.298715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.298748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.298913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.298949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.299185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.299217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.299392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.299431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.299646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.299678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.299907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.299942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.300165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.300200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-07-10 14:39:01.300405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-07-10 14:39:01.300448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.300660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.300696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 
00:36:51.982 [2024-07-10 14:39:01.300938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.300970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.301153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.301185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.301340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.301372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.301559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.301592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.301816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.301848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.302055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.302090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.302285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.302320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.302516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.302549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.302762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.302798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.302999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.303035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 
00:36:51.982 [2024-07-10 14:39:01.303231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.303268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.303496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.303529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.303755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.303791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.304022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.304054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.304252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.304288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.304472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.304505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.304687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.304724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.304918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.304955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.305118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.305153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.305349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.305382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 
00:36:51.982 [2024-07-10 14:39:01.305606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.305650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.305884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.305920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.306086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.306122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.306303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.306346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.306574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.306611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.306842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.306878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.307115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.307148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.307351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.307383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.307563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.307596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.307749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.307782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 
00:36:51.982 [2024-07-10 14:39:01.307957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.307989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.308159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.308191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.308367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.308399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.308601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.308639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.308892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.308949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.309156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.309189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.309443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.309484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.309654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.309686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.309919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.309956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.310157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.310193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 
00:36:51.982 [2024-07-10 14:39:01.310362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.310398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.310612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.310645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.310797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.310830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.311031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.311063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.311248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.311284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.311476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.311512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.311700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.311739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.311908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.311941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.312132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.312167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-07-10 14:39:01.312328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-07-10 14:39:01.312364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 
00:36:51.983 [2024-07-10 14:39:01.312609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.312641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.312823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.312855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.313035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.313068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.313228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.313261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.313478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.313515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.313708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.313740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.313916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.313952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.314124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.314160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.314327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.314363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.314576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.314609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 
00:36:51.983 [2024-07-10 14:39:01.314810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.314846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.315068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.315105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.315311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.315347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.315579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.315612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.315852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.315889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.316081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.316119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.316353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.316389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.316634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.316668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.316912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.316970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.317169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.317203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 
00:36:51.983 [2024-07-10 14:39:01.317377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.317410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.317594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.317627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.317798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.317830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.318043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.318075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.318310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.318345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.318539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.318576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.318853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.318912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.319138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.319171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.319369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.319405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.319574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.319614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 
00:36:51.983 [2024-07-10 14:39:01.319820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.319856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.320089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.320122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.320331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.320366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.320573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.320611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.320925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.320996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.321226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.321258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.321540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.321577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.321796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.321832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.322049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.322085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.322307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.322339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 
00:36:51.983 [2024-07-10 14:39:01.322544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.322581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.322749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-07-10 14:39:01.322785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-07-10 14:39:01.322976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.323012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.323222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.323255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.323420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.323469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.323654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.323690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.323986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.324045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.324258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.324290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.324490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.324527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.324724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.324760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 
00:36:51.984 [2024-07-10 14:39:01.324984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.325016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.325192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.325224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.325406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.325451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.325670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.325702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.326014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.326070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.326289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.326320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.326537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.326570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.326784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.326819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.327070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.327129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.327357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.327390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 
00:36:51.984 [2024-07-10 14:39:01.327574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.327607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.327783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.327821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.328021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.328054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.328204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.328237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.328415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.328454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.328624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.328660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.328929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.328962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.329117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.329149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.329359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.329409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.329645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.329681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 
00:36:51.984 [2024-07-10 14:39:01.329855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.329890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.330060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.330093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.330245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.330277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.330482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.330533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.330700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.330736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.330915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.330947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.331128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.331167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.331377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.331414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.331658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.331716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.331934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.331966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 
00:36:51.984 [2024-07-10 14:39:01.332172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.332204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.332387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.332419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.332628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.332664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.332868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.332900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.333041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.333073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.333218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.333250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.333456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.333499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.333709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.333741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.333937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.333973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.334193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.334229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 
00:36:51.984 [2024-07-10 14:39:01.334435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.334469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.334647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.334680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.334881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.984 [2024-07-10 14:39:01.334917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.984 qpair failed and we were unable to recover it. 00:36:51.984 [2024-07-10 14:39:01.335088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.335124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.335323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.335359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.335557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.335590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.335807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.335847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.336063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.336099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.336266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.336312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.336538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.336571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 
00:36:51.985 [2024-07-10 14:39:01.336782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.336814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.336960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.337010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.337214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.337246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.337451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.337484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.337705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.337751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.337967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.338003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.338210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.338246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.338439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.338473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.338700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.338740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.338942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.338979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 
00:36:51.985 [2024-07-10 14:39:01.339220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.339281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.339472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.339505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.339736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.339768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.339969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.340001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.340199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.340231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.340405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.340457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.340712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.340754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.340990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.341026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.341220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.341255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.341457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.341490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 
00:36:51.985 [2024-07-10 14:39:01.341689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.341732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.341947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.341982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.342156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.342192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.342368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.342400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.342580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.342611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.342794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.342829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.343081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.343136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.343360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.343392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.343605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.343641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.343868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.343904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 
00:36:51.985 [2024-07-10 14:39:01.344148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.344180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.344349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.344382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.344541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.344573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.344765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.344801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.344998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.345035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.345262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.345294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.345520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.345556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.345737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.345769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.346077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.346132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.346355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.346386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 
00:36:51.985 [2024-07-10 14:39:01.346583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.346616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.346851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.985 [2024-07-10 14:39:01.346888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.985 qpair failed and we were unable to recover it. 00:36:51.985 [2024-07-10 14:39:01.347153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.347188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.347385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.347419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.347646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.347681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.347860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.347898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.348120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.348157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.348357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.348389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.348573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.348605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.348806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.348842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 
00:36:51.986 [2024-07-10 14:39:01.349144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.349204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.349439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.349472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.349650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.349686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.349877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.349909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.350099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.350134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.350338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.350370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.350563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.350596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.350742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.350774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.350947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.350979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.351160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.351192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 
00:36:51.986 [2024-07-10 14:39:01.351386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.351422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.351616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.351652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.351836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.351882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.352084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.352116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.352314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.352349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.352553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.352586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.352763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.352795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.352951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.352983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.353133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.353165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.353343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.353378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 
00:36:51.986 [2024-07-10 14:39:01.353562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.353600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.353835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.353868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.354032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.354067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.354231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.354266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.354500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.354533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.354713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.354746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.354938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.354978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.355142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.355179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.355377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.355412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.355601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.355633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 
00:36:51.986 [2024-07-10 14:39:01.355818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.355855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.356084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.356117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.356308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.356343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.356570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.356603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.356796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.356832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.357025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.357063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.357275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.986 [2024-07-10 14:39:01.357308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.986 qpair failed and we were unable to recover it. 00:36:51.986 [2024-07-10 14:39:01.357519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.357552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.357789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.357825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.358054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.358089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 
00:36:51.987 [2024-07-10 14:39:01.358314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.358350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.358584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.358617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.358846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.358881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.359082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.359117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.359278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.359314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.359490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.359523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.359677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.359711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.359884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.359916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.360059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.360091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.360296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.360327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 
00:36:51.987 [2024-07-10 14:39:01.360545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.360577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.360783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.360821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.361015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.361051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.361258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.361295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.361500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.361536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.361733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.361779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.361947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.361982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.362205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.362236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.362473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.362510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.362706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.362742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 
00:36:51.987 [2024-07-10 14:39:01.363009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.363075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.363244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.363276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.363482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.363520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.363695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.363731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.363983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.364041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.364265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.364297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.364516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.364554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.364790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.364826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.365097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.365132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.365355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.365391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 
00:36:51.987 [2024-07-10 14:39:01.365630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.365663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.365870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.365902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.366087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.366119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.366268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.366300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.366492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.366528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.366701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.366740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.366957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.367004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.367179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.367212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.367371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.367404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.367598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.367630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 
00:36:51.987 [2024-07-10 14:39:01.367930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.367962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.368135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.368167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.368369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.368405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.368653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.368685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.368888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.368924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.369120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.369152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.369352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.369385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.987 [2024-07-10 14:39:01.369578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.987 [2024-07-10 14:39:01.369627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.987 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.369912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.369971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.370195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.370228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 
00:36:51.988 [2024-07-10 14:39:01.370434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.370470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.370668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.370704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.370889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.370921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.371131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.371163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.371409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.371455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.371649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.371686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.371884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.371916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.372062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.372094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.372296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.372345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.372566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.372603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 
00:36:51.988 [2024-07-10 14:39:01.372935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.372996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.373215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.373247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.373393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.373431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.373587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.373618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.373822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.373858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.374020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.374052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.374245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.374285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.374494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.374528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.374704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.374736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.374933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.374965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 
00:36:51.988 [2024-07-10 14:39:01.375136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.375171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.375369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.375401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.375581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.375615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.375792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.375824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.375971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.376002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.376186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.376222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.376422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.376467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.376627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.376659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.376875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.376910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.377106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.377141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 
00:36:51.988 [2024-07-10 14:39:01.377371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.377406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.377630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.377662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.377891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.377927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.378116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.378153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.378326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.378362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.378569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.378602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.378825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.378864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.379075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.379107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.379280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.379316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.379513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.379546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 
00:36:51.988 [2024-07-10 14:39:01.379774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.379810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.379976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.380011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.380171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.380207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.380391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.380423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.988 qpair failed and we were unable to recover it. 00:36:51.988 [2024-07-10 14:39:01.380586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.988 [2024-07-10 14:39:01.380617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.380802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.380834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.380981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.381013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.381200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.381232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.381400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.381452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.381636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.381671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 
00:36:51.989 [2024-07-10 14:39:01.381929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.381997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.382221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.382253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.382422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.382472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.382666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.382703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.382935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.382970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.383195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.383227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.383439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.383479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.383671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.383706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.383973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.384009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.384172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.384203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 
00:36:51.989 [2024-07-10 14:39:01.384399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.384444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.384667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.384701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.384978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.385010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.385156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.385188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.385391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.385434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.385638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.385674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.385869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.385907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.386091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.386123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.386320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.386356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.386521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.386558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 
00:36:51.989 [2024-07-10 14:39:01.386724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.386760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.386939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.386972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.387126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.387176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.387352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.387387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.387570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.387606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.387805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.387838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.388054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.388090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.388250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.388285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.388494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.388530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.388704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.388737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 
00:36:51.989 [2024-07-10 14:39:01.388916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.388949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.389107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.389160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.389345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.389377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.389575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.389607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.389804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.389840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.390061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.390097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.390270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.390302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.390481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.390513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.390732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.390764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.390919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.390970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 
00:36:51.989 [2024-07-10 14:39:01.391175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.391207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.391378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.391415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.391610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.391646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.391848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.391880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.392028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.989 [2024-07-10 14:39:01.392060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.989 qpair failed and we were unable to recover it. 00:36:51.989 [2024-07-10 14:39:01.392237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.392270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.392489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.392526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.392730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.392766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.392990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.393022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.393218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.393255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 
00:36:51.990 [2024-07-10 14:39:01.393439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.393489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.393642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.393674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.393876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.393912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.394141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.394173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.394351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.394389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.394609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.394645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.394826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.394858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.395033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.395065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.395220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.395256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.395450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.395496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 
00:36:51.990 [2024-07-10 14:39:01.395743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.395775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.395969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.396002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.396183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.396219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.396402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.396447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.396687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.396742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.396942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.396974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.397190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.397250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.397472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.397504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.397651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.397702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.397908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.397940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 
00:36:51.990 [2024-07-10 14:39:01.398209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.398269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.398494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.398528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.398726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.398762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.398945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.398978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.399132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.399164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.399341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.399375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.399567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.399603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.399791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.399823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.399979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.400012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.400159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.400192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 
00:36:51.990 [2024-07-10 14:39:01.400368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.400400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.400600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.400633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.400833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.400869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.401040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.401073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.401222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.401271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.401494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.401527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.401680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.401717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.401870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.401901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.402097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.402133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.402342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.402375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 
00:36:51.990 [2024-07-10 14:39:01.402556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.402591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.402770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.402802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-07-10 14:39:01.402996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-07-10 14:39:01.403031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.403215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.403247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.403451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.403487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.403647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.403683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.403877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.403913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.404087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.404119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.404314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.404350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.404568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.404601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 
00:36:51.991 [2024-07-10 14:39:01.404829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.404865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.405060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.405093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.405288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.405325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.405558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.405595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.405787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.405824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.405988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.406020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.406220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.406256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.406417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.406471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.406695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.406730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.406896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.406929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 
00:36:51.991 [2024-07-10 14:39:01.407110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.407173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.407373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.407405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.407570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.407603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.407794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.407826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.407979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.408011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.408166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.408216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.408385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.408421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.408634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.408666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.408820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.408851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.409027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.409059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 
00:36:51.991 [2024-07-10 14:39:01.409233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.409268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.409455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.409488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.409720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.409752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.409930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.409963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.410176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.410211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.410399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.410447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.410631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.410667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.410856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.410892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.411080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.411127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.411332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.411365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 
00:36:51.991 [2024-07-10 14:39:01.411552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.411589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.411781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.411816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.412022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.412054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.412207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.412239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.412439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.412486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.412652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.412687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.412848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.412884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.413059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.413091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.413282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.413317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.413514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.413551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 
00:36:51.991 [2024-07-10 14:39:01.413763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.413796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-07-10 14:39:01.413942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-07-10 14:39:01.413974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.992 [2024-07-10 14:39:01.414121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-07-10 14:39:01.414153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-07-10 14:39:01.414386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-07-10 14:39:01.414422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-07-10 14:39:01.414640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-07-10 14:39:01.414673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-07-10 14:39:01.414821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-07-10 14:39:01.414853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-07-10 14:39:01.415003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-07-10 14:39:01.415035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-07-10 14:39:01.415210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-07-10 14:39:01.415242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-07-10 14:39:01.415397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-07-10 14:39:01.415456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-07-10 14:39:01.415655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-07-10 14:39:01.415687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 
[... the same three-line failure record (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every connection attempt timestamped 14:39:01.415858 through 14:39:01.461659 (console time 00:36:51.992 to 00:36:52.273), alternating between tqpair=0x6150001f2a00 and tqpair=0x6150001ffe80 ...]
00:36:52.273 [2024-07-10 14:39:01.461876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.273 [2024-07-10 14:39:01.461908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:52.273 qpair failed and we were unable to recover it.
00:36:52.273 [2024-07-10 14:39:01.462096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.273 [2024-07-10 14:39:01.462128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.273 qpair failed and we were unable to recover it. 00:36:52.273 [2024-07-10 14:39:01.462330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.273 [2024-07-10 14:39:01.462366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.273 qpair failed and we were unable to recover it. 00:36:52.273 [2024-07-10 14:39:01.462571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.273 [2024-07-10 14:39:01.462608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.273 qpair failed and we were unable to recover it. 00:36:52.273 [2024-07-10 14:39:01.462790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.273 [2024-07-10 14:39:01.462826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.273 qpair failed and we were unable to recover it. 00:36:52.273 [2024-07-10 14:39:01.463020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.273 [2024-07-10 14:39:01.463055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.273 qpair failed and we were unable to recover it. 00:36:52.273 [2024-07-10 14:39:01.463246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.463281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.463448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.463484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.463663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.463695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.463854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.463885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.464037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.464069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 
00:36:52.274 [2024-07-10 14:39:01.464224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.464257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.464438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.464481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.464647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.464683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.464898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.464930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.465074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.465106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.465280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.465312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.465470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.465503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.465705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.465741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.465917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.465951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.466131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.466164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 
00:36:52.274 [2024-07-10 14:39:01.466360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.466397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.466606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.466638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.466863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.466895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.467041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.467073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.467251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.467283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.467468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.467502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.467708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.467779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.467984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.468016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.468203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.468239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.468433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.468476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 
00:36:52.274 [2024-07-10 14:39:01.468669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.468705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.468879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.468911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.469066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.469097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.469290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.469327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.469522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.469558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.469729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.274 [2024-07-10 14:39:01.469761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.274 qpair failed and we were unable to recover it. 00:36:52.274 [2024-07-10 14:39:01.469958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.469995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.470201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.470233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.470380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.470414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.470598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.470632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 
00:36:52.275 [2024-07-10 14:39:01.470837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.470874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.471098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.471134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.471295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.471330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.471497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.471536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.471728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.471761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.471936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.471970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.472189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.472222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.472435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.472486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.472635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.472668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.472868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.472903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 
00:36:52.275 [2024-07-10 14:39:01.473093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.473128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.473298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.473330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.473559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.473595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.473783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.473818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.474127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.474190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.474390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.474422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.474680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.474732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.474932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.474969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.475170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.475233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.475443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.475478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 
00:36:52.275 [2024-07-10 14:39:01.475658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.475698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.475899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.475937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.476141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.476192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.476419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.476465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.476679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.476718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.476949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.476987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.477215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.477247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.477403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.477443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.477608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.477641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.477845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.477878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 
00:36:52.275 [2024-07-10 14:39:01.478042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.478074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.478282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.478314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.478497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.478530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.478707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.478739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.478896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.275 [2024-07-10 14:39:01.478928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.275 qpair failed and we were unable to recover it. 00:36:52.275 [2024-07-10 14:39:01.479118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.479150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.479301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.479333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.479474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.479510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.479652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.479684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.479836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.479867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 
00:36:52.276 [2024-07-10 14:39:01.480070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.480102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.480271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.480303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.480481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.480513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.480712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.480751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.480914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.480946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.481115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.481147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.481335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.481368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.481542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.481575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.481728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.481760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.481915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.481947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 
00:36:52.276 [2024-07-10 14:39:01.482121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.482164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.482347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.482378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.482590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.482623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.482768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.482802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.483001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.483033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.483209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.483243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.483470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.483513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.483691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.483723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.483881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.483912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.484114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.484146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 
00:36:52.276 [2024-07-10 14:39:01.484295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.484329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.484507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.484540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.484735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.484767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.484907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.484939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.485086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.485119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.485268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.485301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.485466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.485506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.485659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.485691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.485878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.485911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.486060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.486092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 
00:36:52.276 [2024-07-10 14:39:01.486272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.486305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.486480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.486513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.486672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.486704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.486907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.486939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.487135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.276 [2024-07-10 14:39:01.487166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.276 qpair failed and we were unable to recover it. 00:36:52.276 [2024-07-10 14:39:01.487344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.487376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 00:36:52.277 [2024-07-10 14:39:01.487586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.487619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 00:36:52.277 [2024-07-10 14:39:01.487799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.487832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 00:36:52.277 [2024-07-10 14:39:01.488009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.488043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 00:36:52.277 [2024-07-10 14:39:01.488190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.488222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 
00:36:52.277 [2024-07-10 14:39:01.488403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.488444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 00:36:52.277 [2024-07-10 14:39:01.488651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.488683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 00:36:52.277 [2024-07-10 14:39:01.488864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.488896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 00:36:52.277 [2024-07-10 14:39:01.489093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.489129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 00:36:52.277 [2024-07-10 14:39:01.489314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.489346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 00:36:52.277 [2024-07-10 14:39:01.489520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.489553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 00:36:52.277 [2024-07-10 14:39:01.489704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.489736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 00:36:52.277 [2024-07-10 14:39:01.489913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.489945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 00:36:52.277 [2024-07-10 14:39:01.490102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.490134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 00:36:52.277 [2024-07-10 14:39:01.490318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.490355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 
00:36:52.277 [2024-07-10 14:39:01.490566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.277 [2024-07-10 14:39:01.490600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.277 qpair failed and we were unable to recover it. 
00:36:52.282 [the same three messages repeat continuously with timestamps from 2024-07-10 14:39:01.490776 through 14:39:01.534722: connect() failed, errno = 111; sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.] 
00:36:52.282 [2024-07-10 14:39:01.534872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.282 [2024-07-10 14:39:01.534904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.282 qpair failed and we were unable to recover it. 
00:36:52.282 [2024-07-10 14:39:01.535058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.282 [2024-07-10 14:39:01.535090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.282 qpair failed and we were unable to recover it. 00:36:52.282 [2024-07-10 14:39:01.535231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.282 [2024-07-10 14:39:01.535263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.535468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.535521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.535699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.535731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.535906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.535938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.536118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.536150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.536301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.536343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.536548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.536581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.536763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.536795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.536941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.536973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 
00:36:52.283 [2024-07-10 14:39:01.537119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.537151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.537370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.537402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.537571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.537603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.537782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.537816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.537973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.538007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.538220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.538253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.538438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.538471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.538625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.538658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.538804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.538837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.538977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.539010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 
00:36:52.283 [2024-07-10 14:39:01.539188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.539220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.539398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.539441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.539656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.539688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.539843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.539879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.540065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.540098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.540314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.540350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.540530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.540563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.540752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.540784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.540935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.540967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.541168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.541200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 
00:36:52.283 [2024-07-10 14:39:01.541371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.541403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.541565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.541597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.541784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.283 [2024-07-10 14:39:01.541817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.283 qpair failed and we were unable to recover it. 00:36:52.283 [2024-07-10 14:39:01.541961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.541993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.542171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.542205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.542416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.542456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.542612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.542644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.542828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.542862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.543069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.543101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.543276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.543308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 
00:36:52.284 [2024-07-10 14:39:01.543457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.543491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.543640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.543672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.543842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.543875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.544047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.544079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.544235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.544267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.544472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.544512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.544692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.544725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.544873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.544906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.545088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.545121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.545298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.545330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 
00:36:52.284 [2024-07-10 14:39:01.545508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.545541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.545718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.545750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.545928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.545961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.546143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.546175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.546320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.546352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.546531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.546564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.546747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.546779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.546984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.547016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.547221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.547254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.547528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.547561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 
00:36:52.284 [2024-07-10 14:39:01.547775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.547808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.547985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.548017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.548167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.548201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.548370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.548407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.548598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.548630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.548803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.548835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.549011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.549044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.549225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.549257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.549487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.549520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.549704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.549737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 
00:36:52.284 [2024-07-10 14:39:01.549912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.549954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.550163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.550196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.284 qpair failed and we were unable to recover it. 00:36:52.284 [2024-07-10 14:39:01.550352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.284 [2024-07-10 14:39:01.550384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.550577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.550610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.550799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.550831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.551004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.551036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.551196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.551230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.551411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.551452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.551639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.551672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.551820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.551852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 
00:36:52.285 [2024-07-10 14:39:01.552064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.552097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.552277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.552309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.552510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.552543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.552725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.552757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.552912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.552944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.553151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.553184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.553359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.553391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.553547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.553579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.553772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.553805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.554005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.554037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 
00:36:52.285 [2024-07-10 14:39:01.554200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.554234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.554473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.554506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.554684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.554716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.554917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.554949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.555160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.555192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.555346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.555379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.555558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.555591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.555767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.555799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.555998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.556030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.556208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.556240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 
00:36:52.285 [2024-07-10 14:39:01.556451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.556489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.556664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.556697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.556903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.556935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.557081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.557116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.557320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.557352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.557519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.557552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.557757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.557788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.285 [2024-07-10 14:39:01.557962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.285 [2024-07-10 14:39:01.557994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.285 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.558169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.558201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.558372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.558405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 
00:36:52.286 [2024-07-10 14:39:01.558583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.558616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.558797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.558830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.559013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.559045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.559213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.559245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.559382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.559414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.559575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.559607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.559782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.559815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.559998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.560031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.560207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.560239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.560393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.560434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 
00:36:52.286 [2024-07-10 14:39:01.560614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.560646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.560822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.560854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.561011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.561043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.561221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.561254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.561439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.561481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.561652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.561685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.561833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.561865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.562067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.562100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.562276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.562308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.562495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.562529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 
00:36:52.286 [2024-07-10 14:39:01.562694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.562731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.562905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.562938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.563113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.563145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.563320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.563352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.563541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.563583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.563740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.563774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.563982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.564015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.564190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.564222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.564398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.564436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.564617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.564649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 
00:36:52.286 [2024-07-10 14:39:01.564841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.564873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.565016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.565048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.565253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.565285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.565466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.565499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.565676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.565709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.286 [2024-07-10 14:39:01.565853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.286 [2024-07-10 14:39:01.565886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.286 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.566089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.566121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.566359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.566394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.566626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.566676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.566875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.566912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 
00:36:52.287 [2024-07-10 14:39:01.567121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.567172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.567440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.567481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.567739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.567773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.567982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.568033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.568245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.568296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.568456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.568490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.568645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.568678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.568837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.568869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.569015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.569047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.569249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.569286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 
00:36:52.287 [2024-07-10 14:39:01.569486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.569519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.569665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.569698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.569876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.569912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.570107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.570143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.570318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.570354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.570561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.570594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.570776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.570811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.571003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.571040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.571263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.571300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.571508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.571541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 
00:36:52.287 [2024-07-10 14:39:01.571720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.571757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.571936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.571987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.572184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.572220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.572411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.572473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.572618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.572651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.572864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.572900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.573161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.573196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.573364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.573401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.573569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.287 [2024-07-10 14:39:01.573601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.287 qpair failed and we were unable to recover it. 00:36:52.287 [2024-07-10 14:39:01.573824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.573859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 
00:36:52.288 [2024-07-10 14:39:01.574110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.574145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.574321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.574362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.574537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.574570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.574748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.574785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.575014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.575047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.575251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.575287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.575495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.575528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.575748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.575783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.576041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.576096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.576293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.576328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 
00:36:52.288 [2024-07-10 14:39:01.576562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.576595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.576839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.576874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.577199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.577255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.577451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.577500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.577672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.577725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.577950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.577985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.578225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.578274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.578490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.578523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.578742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.578786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.579107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.579174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 
00:36:52.288 [2024-07-10 14:39:01.579348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.579382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.579576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.579609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.579768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.579801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.580079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.580144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.580390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.580435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.580641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.580673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.580884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.580932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.581186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.581240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.581457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.581490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.581717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.581753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 
00:36:52.288 [2024-07-10 14:39:01.581974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.582014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.582181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.288 [2024-07-10 14:39:01.582217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.288 qpair failed and we were unable to recover it. 00:36:52.288 [2024-07-10 14:39:01.582410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.582449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.582633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.582665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.582822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.582854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.583008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.583057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.583252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.583288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.583445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.583495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.583676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.583726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.583880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.583915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 
00:36:52.289 [2024-07-10 14:39:01.584132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.584168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.584365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.584401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.584616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.584649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.584900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.584976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.585245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.585301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.585516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.585551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.585725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.585757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.585943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.586010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.586232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.586268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.586460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.586492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 
00:36:52.289 [2024-07-10 14:39:01.586726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.586762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.587059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.587094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.587314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.587349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.587568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.587601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.587758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.587790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.587989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.588024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.588206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.588238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.588443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.588494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.588723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.588758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.589078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.589138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 
00:36:52.289 [2024-07-10 14:39:01.589313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.589346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.589522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.589554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.589748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.589784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.590032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.590088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.590313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.590353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.590559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.590596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.590798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.590834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.591061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.591093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.591273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.591305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.591452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.591488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 
00:36:52.289 [2024-07-10 14:39:01.591707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.591747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.289 [2024-07-10 14:39:01.592050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.289 [2024-07-10 14:39:01.592112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.289 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.592306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.592338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.592542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.592579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.592774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.592810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.592978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.593013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.593184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.593227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.593435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.593481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.593675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.593717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.593923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.593955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 
00:36:52.290 [2024-07-10 14:39:01.594125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.594157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.594383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.594418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.594602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.594638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.594907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.594963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.595199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.595231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.595476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.595509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.595723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.595772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.596071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.596106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.596306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.596339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.596542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.596578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 
00:36:52.290 [2024-07-10 14:39:01.596769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.596816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.597035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.597070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.597235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.597269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.597442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.597482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.597684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.597719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.598019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.598081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.598303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.598335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.598515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.598551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.598775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.598810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.599075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.599145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 
00:36:52.290 [2024-07-10 14:39:01.599368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.599400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.599645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.599677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.599860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.599892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.600084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.600120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.600317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.290 [2024-07-10 14:39:01.600352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.290 qpair failed and we were unable to recover it. 00:36:52.290 [2024-07-10 14:39:01.600533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.600565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 00:36:52.291 [2024-07-10 14:39:01.600740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.600776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 00:36:52.291 [2024-07-10 14:39:01.600940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.600976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 00:36:52.291 [2024-07-10 14:39:01.601191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.601224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 00:36:52.291 [2024-07-10 14:39:01.601454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.601492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 
00:36:52.291 [2024-07-10 14:39:01.601691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.601737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 00:36:52.291 [2024-07-10 14:39:01.601934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.601969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 00:36:52.291 [2024-07-10 14:39:01.602196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.602228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 00:36:52.291 [2024-07-10 14:39:01.602431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.602473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 00:36:52.291 [2024-07-10 14:39:01.602671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.602703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 00:36:52.291 [2024-07-10 14:39:01.602929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.602965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 00:36:52.291 [2024-07-10 14:39:01.603172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.603204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 00:36:52.291 [2024-07-10 14:39:01.603404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.603455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 00:36:52.291 [2024-07-10 14:39:01.603693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.603725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 00:36:52.291 [2024-07-10 14:39:01.603902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.291 [2024-07-10 14:39:01.603934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.291 qpair failed and we were unable to recover it. 
00:36:52.291 [2024-07-10 14:39:01.604108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:52.291 [2024-07-10 14:39:01.604140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 
00:36:52.291 qpair failed and we were unable to recover it. 
00:36:52.291-00:36:52.296 [2024-07-10 14:39:01.604355 through 14:39:01.653000] The same three-message sequence repeats for every subsequent reconnect attempt: posix_sock_create() fails with errno = 111 (Connection refused), nvme_tcp_qpair_connect_sock() reports a sock connection error for tqpair=0x6150001f2a00 (and, for a handful of attempts, tqpair=0x6150001ffe80) with addr=10.0.0.2, port=4420, and each time the qpair failed and could not be recovered.
00:36:52.296 [2024-07-10 14:39:01.653179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-07-10 14:39:01.653212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-07-10 14:39:01.653358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-07-10 14:39:01.653407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-07-10 14:39:01.653589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-07-10 14:39:01.653622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-07-10 14:39:01.653833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-07-10 14:39:01.653869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-07-10 14:39:01.654044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-07-10 14:39:01.654080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.654265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.654297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.654480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.654513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.654673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.654725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.654892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.654928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.655118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.655153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 
00:36:52.297 [2024-07-10 14:39:01.655349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.655381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.655567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.655602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.655757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.655790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.655987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.656022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.656195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.656227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.656384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.656417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.656600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.656637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.656918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.656953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.657125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.657157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.657315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.657347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 
00:36:52.297 [2024-07-10 14:39:01.657492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.657531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.657745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.657778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.657954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.657986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.658186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.658221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.658438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.658471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.658616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.658648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.658795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.658828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.659000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.659037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.659233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.659270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.659477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.659514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 
00:36:52.297 [2024-07-10 14:39:01.659697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.659729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.659905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.659937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.660114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.660157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.660399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.660446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.660633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.660665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.660860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.660896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.661089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-07-10 14:39:01.661125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-07-10 14:39:01.661292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.661327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.661532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.661565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.661712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.661744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 
00:36:52.298 [2024-07-10 14:39:01.661924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.661960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.662121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.662157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.662384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.662416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.662629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.662661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.662806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.662854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.663053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.663088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.663274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.663306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.663512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.663548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.663738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.663773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.664043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.664075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 
00:36:52.298 [2024-07-10 14:39:01.664285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.664317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.664498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.664534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.664729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.664765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.664968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.664999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.665180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.665212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.665415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.665459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.665641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.665673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.665878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.665914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.666156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.666188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.666390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.666433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 
00:36:52.298 [2024-07-10 14:39:01.666610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.666649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.666902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.666934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.667116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.667148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.667303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.667335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.667546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.667578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.667790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.667822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.667996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.668028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.668206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.668238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.668437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.668474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.668645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.668681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 
00:36:52.298 [2024-07-10 14:39:01.668883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.668915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.669073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.669107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.669278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.669315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.669507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.669543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.669761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.669794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.669957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.669992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.670185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.670222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-07-10 14:39:01.670418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-07-10 14:39:01.670462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.670667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.670700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.670858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.670894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 
00:36:52.299 [2024-07-10 14:39:01.671063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.671099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.671279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.671317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.671550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.671583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.671784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.671819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.671982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.672017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.672179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.672214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.672407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.672447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.672651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.672687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.672879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.672915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.673111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.673146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 
00:36:52.299 [2024-07-10 14:39:01.673314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.673346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.673541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.673577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.673771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.673806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.674021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.674054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.674233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.674266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.674440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.674477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.674675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.674717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.674922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.674972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.675201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.675233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.675463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.675500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 
00:36:52.299 [2024-07-10 14:39:01.675673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.675713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.675980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.676036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.676258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.676290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.676468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.676500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.676646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.676679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.676902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.676958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.677159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.677192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.677371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.677403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.677564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.677596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.677851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.677927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 
00:36:52.299 [2024-07-10 14:39:01.678125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.678156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.678321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.678357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.678555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.678599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.678759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.678794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.679034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.679067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.679269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.679306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.679509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-07-10 14:39:01.679546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-07-10 14:39:01.679710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.300 [2024-07-10 14:39:01.679747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.300 qpair failed and we were unable to recover it. 00:36:52.300 [2024-07-10 14:39:01.679973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.300 [2024-07-10 14:39:01.680005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.300 qpair failed and we were unable to recover it. 00:36:52.300 [2024-07-10 14:39:01.680210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.300 [2024-07-10 14:39:01.680245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.300 qpair failed and we were unable to recover it. 
00:36:52.300 [2024-07-10 14:39:01.680455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.300 [2024-07-10 14:39:01.680489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.300 qpair failed and we were unable to recover it. 00:36:52.300 [2024-07-10 14:39:01.680637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.300 [2024-07-10 14:39:01.680670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.300 qpair failed and we were unable to recover it. 00:36:52.300 [2024-07-10 14:39:01.680862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.300 [2024-07-10 14:39:01.680894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.300 qpair failed and we were unable to recover it. 00:36:52.300 [2024-07-10 14:39:01.681096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.300 [2024-07-10 14:39:01.681132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.300 qpair failed and we were unable to recover it. 00:36:52.300 [2024-07-10 14:39:01.681295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.300 [2024-07-10 14:39:01.681332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.300 qpair failed and we were unable to recover it. 00:36:52.300 [2024-07-10 14:39:01.681608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.300 [2024-07-10 14:39:01.681666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.300 qpair failed and we were unable to recover it. 00:36:52.300 [2024-07-10 14:39:01.681866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.300 [2024-07-10 14:39:01.681898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.300 qpair failed and we were unable to recover it. 00:36:52.300 [2024-07-10 14:39:01.682105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.300 [2024-07-10 14:39:01.682137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.300 qpair failed and we were unable to recover it. 00:36:52.300 [2024-07-10 14:39:01.682335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.300 [2024-07-10 14:39:01.682371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.300 qpair failed and we were unable to recover it. 00:36:52.300 [2024-07-10 14:39:01.682554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.300 [2024-07-10 14:39:01.682592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.300 qpair failed and we were unable to recover it. 
00:36:52.300 [2024-07-10 14:39:01.682803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.300 [2024-07-10 14:39:01.682836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:52.300 qpair failed and we were unable to recover it.
00:36:52.300 [... the preceding three messages repeat for tqpair=0x6150001f2a00 through 2024-07-10 14:39:01.699396 ...]
00:36:52.302 [2024-07-10 14:39:01.699627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.302 [2024-07-10 14:39:01.699674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:52.302 qpair failed and we were unable to recover it.
00:36:52.302 [... the same three messages repeat for tqpair=0x6150001ffe80 through 2024-07-10 14:39:01.711245 ...]
00:36:52.303 [2024-07-10 14:39:01.711454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.303 [2024-07-10 14:39:01.711521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:52.303 qpair failed and we were unable to recover it.
00:36:52.572 [... the same three messages repeat for tqpair=0x6150001f2a00 through 2024-07-10 14:39:01.732231 ...]
00:36:52.572 [2024-07-10 14:39:01.732413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.732458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.732669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.732701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.732866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.732898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.733079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.733120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.733271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.733304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.733490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.733536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.733691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.733725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.733898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.733936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.734168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.734206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.734398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.734447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 
00:36:52.572 [2024-07-10 14:39:01.734634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.734667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.734828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.734863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.735051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.735100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.735299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.735336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.735524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.735558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.735756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.735792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.735988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.736023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.736216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.736252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.736475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.736507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.736675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.736710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 
00:36:52.572 [2024-07-10 14:39:01.736903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.736938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.737208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.737263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.737472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.737510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.737685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.737724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.737944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.737980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.738177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.738215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.572 [2024-07-10 14:39:01.738448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.572 [2024-07-10 14:39:01.738488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.572 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.738698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.738748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.738976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.739011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.739233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.739269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 
00:36:52.573 [2024-07-10 14:39:01.739471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.739505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.739703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.739748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.739938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.739974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.740172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.740214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.740413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.740461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.740709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.740752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.740960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.740998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.741182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.741218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.741420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.741459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.741668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.741711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 
00:36:52.573 [2024-07-10 14:39:01.741945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.741981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.742306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.742380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.742598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.742631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.742837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.742872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.743062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.743098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.743254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.743291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.743500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.743533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.743742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.743778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.743974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.744009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.744211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.744246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 
00:36:52.573 [2024-07-10 14:39:01.744447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.744488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.744627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.744658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.744809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.744841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.745000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.745033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.745248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.745281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.745493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.745526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.745687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.745738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.745915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.745949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.746122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.746155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.746370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.746402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 
00:36:52.573 [2024-07-10 14:39:01.746620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.746667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.746855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.746891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.747062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.747095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.747282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.747314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.747537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.747571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.747775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.573 [2024-07-10 14:39:01.747808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.573 qpair failed and we were unable to recover it. 00:36:52.573 [2024-07-10 14:39:01.747984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.748016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.748215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.748255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.748452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.748489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.748649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.748684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 
00:36:52.574 [2024-07-10 14:39:01.748914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.748946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.749155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.749187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.749380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.749416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.749631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.749663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.749824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.749856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.750007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.750039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.750238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.750273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.750439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.750486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.750690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.750722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.750901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.750937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 
00:36:52.574 [2024-07-10 14:39:01.751127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.751163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.751387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.751423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.751668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.751700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.751911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.751946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.752174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.752206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.752405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.752450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.752659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.752691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.752901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.752934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.753109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.753141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.753310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.753342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 
00:36:52.574 [2024-07-10 14:39:01.753570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.753603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.753799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.753835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.754056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.754091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.754257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.754293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.754496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.754529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.754759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.754795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.755016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.755052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.755255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.755286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.755462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.755496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.755713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.755754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 
00:36:52.574 [2024-07-10 14:39:01.755953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.755989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.756199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.756230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.756403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.756451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.756627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.756662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.756866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.756902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.574 [2024-07-10 14:39:01.757063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.574 [2024-07-10 14:39:01.757100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.574 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.757287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.757320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.757516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.757557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.757779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.757815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.758085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.758146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 
00:36:52.575 [2024-07-10 14:39:01.758358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.758390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.758567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.758600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.758762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.758798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.758991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.759026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.759224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.759257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.759416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.759461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.759662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.759698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.759975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.760007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.760194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.760226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.760451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.760495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 
00:36:52.575 [2024-07-10 14:39:01.760733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.760769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.760993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.761029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.761228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.761260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.761442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.761484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.761678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.761726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.762019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.762051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.762238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.762270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.762477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.762513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.762683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.762718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 00:36:52.575 [2024-07-10 14:39:01.763023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.575 [2024-07-10 14:39:01.763080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.575 qpair failed and we were unable to recover it. 
00:36:52.575 [2024-07-10 14:39:01.763279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.575 [2024-07-10 14:39:01.763312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:52.575 qpair failed and we were unable to recover it.
00:36:52.575 [... the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x6150001f2a00 through 2024-07-10 14:39:01.773782 ...]
00:36:52.576 [2024-07-10 14:39:01.773987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.576 [2024-07-10 14:39:01.774036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:52.576 qpair failed and we were unable to recover it.
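errno 111 on Linux is ECONNREFUSED: each connect() toward 10.0.0.2 port 4420 (the NVMe/TCP target address and port named in the log) is being refused because nothing is accepting connections on that port at that moment, so posix_sock_create fails and nvme_tcp_qpair_connect_sock cannot bring the qpair up. A minimal, self-contained sketch of the same failing call, reusing only the address and port taken from the log (the probe itself is illustrative and not part of the test), is:

/* probe_4420.c - minimal sketch, not part of the test: reproduce the
 * "connect() failed, errno = 111" symptom against the address/port from the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    int fd, rc;

    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
    if (rc < 0) {
        /* With no listener on the port this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}

If a listener were accepting on 10.0.0.2:4420, the same probe would simply print "connected"; the refusal is why every qpair attempt in this stretch of the log fails identically.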
00:36:52.577 [2024-07-10 14:39:01.775422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.577 [2024-07-10 14:39:01.775462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:52.577 qpair failed and we were unable to recover it.
00:36:52.577 [... the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x615000210000 through 2024-07-10 14:39:01.815728 ...]
00:36:52.581 [2024-07-10 14:39:01.815939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.581 [2024-07-10 14:39:01.815991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:52.581 qpair failed and we were unable to recover it.
00:36:52.581 [2024-07-10 14:39:01.816193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.816226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.816367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.816400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.816673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.816725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.816965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.817017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.817199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.817241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.817419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.817460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.817672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.817706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.817886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.817936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.818172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.818224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.818383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.818415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 
00:36:52.581 [2024-07-10 14:39:01.818618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.818668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.818908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.818958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.819160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.819211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.819396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.819443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.819711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.819777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.819991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.820030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.820230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.820268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.820503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.820537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.820716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.820753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.820946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.820982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 
00:36:52.581 [2024-07-10 14:39:01.821148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.821185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.821389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.821422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.821605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.821638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.821808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.821844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.822066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.822102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.822380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.822443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.822670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.822703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.822871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.822909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.823077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.823139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.823360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.823396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 
00:36:52.581 [2024-07-10 14:39:01.823621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.823670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.823856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.581 [2024-07-10 14:39:01.823910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.581 qpair failed and we were unable to recover it. 00:36:52.581 [2024-07-10 14:39:01.824078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.824130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.824312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.824345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.824550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.824589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.824791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.824843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.825033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.825086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.825269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.825302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.825572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.825624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.825869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.825921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 
00:36:52.582 [2024-07-10 14:39:01.826197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.826234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.826507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.826540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.826774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.826824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.827031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.827082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.827285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.827337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.827519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.827553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.827757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.827808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.828050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.828101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.828324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.828358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.829270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.829310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 
00:36:52.582 [2024-07-10 14:39:01.829520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.829574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.829853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.829905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.830151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.830184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.830327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.830360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.830539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.830591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.830773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.830826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.831024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.831074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.831253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.831286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.831496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.831529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.831713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.831747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 
00:36:52.582 [2024-07-10 14:39:01.831924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.831957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.832144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.832177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.832383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.832417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.832697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.832730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.832952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.833003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.833237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.833288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.833453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.833499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.833717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.833751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.833980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.834032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.834236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.834270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 
00:36:52.582 [2024-07-10 14:39:01.834452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.834486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.582 qpair failed and we were unable to recover it. 00:36:52.582 [2024-07-10 14:39:01.834724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.582 [2024-07-10 14:39:01.834775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.835015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.835067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.835286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.835318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.835524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.835578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.835792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.835842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.836059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.836109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.836335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.836367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.837236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.837273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.837516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.837569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 
00:36:52.583 [2024-07-10 14:39:01.837769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.837822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.838087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.838143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.838350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.838384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.839335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.839372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.839596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.839656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.840019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.840074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.840287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.840320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.840551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.840602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.840800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.840852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.841221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.841275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 
00:36:52.583 [2024-07-10 14:39:01.841486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.841538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.841714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.841746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.841916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.841949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.842154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.842187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.842367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.842400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.842617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.842669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.842878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.842928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.843168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.843219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.843453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.843486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.843701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.843750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 
00:36:52.583 [2024-07-10 14:39:01.843998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.844049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.844287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.844342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.844574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.844627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.844879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.844935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.845300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.845351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.845543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.845577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.845785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.845835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.583 [2024-07-10 14:39:01.846045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.583 [2024-07-10 14:39:01.846095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.583 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.846293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.846327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.846537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.846590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 
00:36:52.584 [2024-07-10 14:39:01.846842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.846893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.847247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.847298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.847516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.847567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.847779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.847829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.848033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.848088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.848281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.848314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.848477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.848511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.848779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.848840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.849084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.849136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.849324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.849357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 
00:36:52.584 [2024-07-10 14:39:01.849593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.849646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.849877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.849927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.850157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.850208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.850438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.850472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.850664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.850723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.850900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.850951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.851176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.851228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.851459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.851493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.851740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.851790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.852068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.852117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 
00:36:52.584 [2024-07-10 14:39:01.852377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.852410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.852675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.852727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.852993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.853045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.853267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.853317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.853504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.853562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.853766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.853826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.854049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.854099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.854307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.854340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.854545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.854598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 00:36:52.584 [2024-07-10 14:39:01.854826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.584 [2024-07-10 14:39:01.854861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.584 qpair failed and we were unable to recover it. 
00:36:52.590 [2024-07-10 14:39:01.905875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.905934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.906248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.906307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.906533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.906586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.906820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.906871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.907085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.907134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.907397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.907439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.907642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.907694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.907905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.907966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.908170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.908218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.908396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.908440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 
00:36:52.590 [2024-07-10 14:39:01.908646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.908695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.908892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.908943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.909134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.909184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.909328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.909361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.909528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.909583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.909802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.909837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.910040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.910092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.910271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.910305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.910480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.910513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.590 [2024-07-10 14:39:01.910696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.910730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 
00:36:52.590 [2024-07-10 14:39:01.910886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.590 [2024-07-10 14:39:01.910920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.590 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.911080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.911112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.911375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.911407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.911625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.911659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.911890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.911939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.912258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.912316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.912515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.912567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.912774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.912824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.913044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.913096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.913285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.913320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 
00:36:52.591 [2024-07-10 14:39:01.913480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.913513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.913693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.913731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.913930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.913980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.914135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.914168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.914320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.914353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.914569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.914620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.914805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.914857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.915049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.915100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.915257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.915291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.915496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.915548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 
00:36:52.591 [2024-07-10 14:39:01.915726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.915777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.915983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.916033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.916230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.916263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.916492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.916526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.916723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.916756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.916966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.917017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.917219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.917251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.917440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.917479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.917663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.917715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.917931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.917982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 
00:36:52.591 [2024-07-10 14:39:01.918192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.918225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.918379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.918412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.918614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.918667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.918866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.918917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.919095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.919137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.919320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.919362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.919561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.919614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.919789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.919822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.591 qpair failed and we were unable to recover it. 00:36:52.591 [2024-07-10 14:39:01.920062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.591 [2024-07-10 14:39:01.920113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.920331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.920364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 
00:36:52.592 [2024-07-10 14:39:01.920600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.920657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.920876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.920916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.921116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.921155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.921360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.921397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.921571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.921609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.921811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.921848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.922081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.922118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.922284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.922321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.922493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.922527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.922693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.922745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 
00:36:52.592 [2024-07-10 14:39:01.922947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.922995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.923192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.923230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.923403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.923443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.923642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.923676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.923880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.923917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.924081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.924118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.924314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.924351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.924549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.924583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.924736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.924778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.924930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.924964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 
00:36:52.592 [2024-07-10 14:39:01.925165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.925202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.925379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.925418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.925608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.925641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.925804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.925838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.926006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.926043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.926220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.926257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.926470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.926512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.926687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.926720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.926906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.926940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.927120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.927154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 
00:36:52.592 [2024-07-10 14:39:01.927325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.927359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.927520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.927554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.927708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.927743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.927942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.927993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.592 [2024-07-10 14:39:01.928184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.592 [2024-07-10 14:39:01.928240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.592 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.928551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.928585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.928781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.928857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.929132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.929188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.929394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.929434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.929635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.929668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 
00:36:52.593 [2024-07-10 14:39:01.929857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.929890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.930104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.930154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.930337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.930380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.930581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.930632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.930803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.930856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.931059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.931110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.931326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.931358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.931550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.931601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.931842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.931894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.932089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.932133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 
00:36:52.593 [2024-07-10 14:39:01.932336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.932382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.932620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.932659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.932946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.932983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.933175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.933213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.933403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.933448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.933649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.933682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.933896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.933932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.934262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.934320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.934564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.934597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.934774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.934811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 
00:36:52.593 [2024-07-10 14:39:01.935014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.935066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.935444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.935523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.935702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.935735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.935974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.936025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.936230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.936280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.936469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.936505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.936684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.936718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.936995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.937045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.937247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.937298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.937481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.937514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 
00:36:52.593 [2024-07-10 14:39:01.937746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.937796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.938000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.593 [2024-07-10 14:39:01.938051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.593 qpair failed and we were unable to recover it. 00:36:52.593 [2024-07-10 14:39:01.938240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.938273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.938455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.938506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.938720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.938771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.938989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.939038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.939220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.939252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.939516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.939549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.939777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.939827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.940071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.940121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 
00:36:52.594 [2024-07-10 14:39:01.940280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.940312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.940521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.940573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.940750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.940800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.941003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.941053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.941207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.941240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.941390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.941423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.941646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.941679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.941869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.941902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.942085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.942118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.942306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.942339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 
00:36:52.594 [2024-07-10 14:39:01.942514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.942566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.942789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.942841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.943202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.943258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.943406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.943446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.943666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.943719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.943937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.943988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.944147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.944181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.944360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.944394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.944614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.944665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.944889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.944940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 
00:36:52.594 [2024-07-10 14:39:01.945096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.945129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.945313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.945355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.945565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.945617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.945853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.945903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.946168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.946201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.946382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.594 [2024-07-10 14:39:01.946415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.594 qpair failed and we were unable to recover it. 00:36:52.594 [2024-07-10 14:39:01.946629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.946680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.946893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.946942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.947153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.947186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.947366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.947399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 
00:36:52.595 [2024-07-10 14:39:01.947588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.947639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.947846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.947895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.948251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.948317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.948522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.948573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.948743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.948804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.949014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.949065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.949271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.949304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.949495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.949530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.949765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.949815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.950018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.950068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 
00:36:52.595 [2024-07-10 14:39:01.950274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.950306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.950474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.950511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.950690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.950741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.950951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.951000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.951154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.951186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.951390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.951422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.951634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.951684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.951918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.951968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.952127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.952162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.952306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.952339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 
00:36:52.595 [2024-07-10 14:39:01.952558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.952608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.952783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.952834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.953027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.953080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.953263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.953295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.953495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.953546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.953755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.953804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.954010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.954060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.954255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.954287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.954482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.954518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.954743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.954793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 
00:36:52.595 [2024-07-10 14:39:01.954994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.955044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.955227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.955263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.955497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.955549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.955765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.955816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.956048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.956098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.956268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.956300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.595 [2024-07-10 14:39:01.956494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.595 [2024-07-10 14:39:01.956545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.595 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.956709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.956741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.956921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.956954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.957114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.957146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 
00:36:52.596 [2024-07-10 14:39:01.957353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.957386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.957572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.957623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.957837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.957869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.958097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.958158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.958340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.958373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.958626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.958677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.958909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.958959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.959256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.959309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.959511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.959561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.959764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.959815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 
00:36:52.596 [2024-07-10 14:39:01.959978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.960030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.960244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.960277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.960457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.960494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.960693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.960753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.960954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.961005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.961183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.961217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.961389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.961422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.961675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.961736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.961916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.961966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.962148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.962198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 
00:36:52.596 [2024-07-10 14:39:01.962351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.962385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.962614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.962665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.962859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.962910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.963118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.963152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.963352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.963385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.963610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.963662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.963895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.963954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.964110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.964143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.964320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.964353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.964534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.964586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 
00:36:52.596 [2024-07-10 14:39:01.964803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.964838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.965176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.965253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.965458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.965491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.965657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.965709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.965929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.965980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.966136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.966170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.596 [2024-07-10 14:39:01.966351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.596 [2024-07-10 14:39:01.966383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.596 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.966612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.966647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.966842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.966893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.967150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.967212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 
00:36:52.597 [2024-07-10 14:39:01.967415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.967456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.967662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.967713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.967919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.967970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.968186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.968237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.968444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.968477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.968652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.968703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.968913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.968965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.969272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.969335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.969509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.969562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.969769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.969819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 
00:36:52.597 [2024-07-10 14:39:01.969990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.970046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.970250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.970283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.970507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.970559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.970762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.970813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.971007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.971058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.971243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.971276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.971473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.971511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.971710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.971763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.971996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.972046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.972249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.972281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 
00:36:52.597 [2024-07-10 14:39:01.972503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.972554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.972732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.972782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.972986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.973037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.973215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.973247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.973408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.973449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.973655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.973688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.973864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.973897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.974137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.974188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.974389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.974422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.974668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.974720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 
00:36:52.597 [2024-07-10 14:39:01.974918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.974968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.975244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.975312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.975512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.975564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.975784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.975818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.597 [2024-07-10 14:39:01.976012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.597 [2024-07-10 14:39:01.976063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.597 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.976261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.976294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.976488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.976541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.976728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.976761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.976936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.976986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.977145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.977178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 
00:36:52.598 [2024-07-10 14:39:01.977353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.977386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.977624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.977677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.977858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.977908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.978267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.978300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.978532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.978583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.978822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.978872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.979070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.979121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.979316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.979349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.979548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.979599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.979846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.979896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 
00:36:52.598 [2024-07-10 14:39:01.980111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.980154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.980300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.980332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.980574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.980625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.980864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.980914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.981114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.981164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.981369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.981401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.981597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.981632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.981866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.981917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.982122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.982172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.982328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.982362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 
00:36:52.598 [2024-07-10 14:39:01.982560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.982610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.982816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.982866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.983096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.983146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.983294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.983327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.983518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.983569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.983745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.983796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.984027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.984077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.984262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.984294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.984514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.984566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.984804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.984853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 
00:36:52.598 [2024-07-10 14:39:01.985055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.985105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.985254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.985291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.985494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.598 [2024-07-10 14:39:01.985546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.598 qpair failed and we were unable to recover it. 00:36:52.598 [2024-07-10 14:39:01.985723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.985756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.985938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.985972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.986148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.986182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.986338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.986372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.986535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.986568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.986752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.986785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.986971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.987004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 
00:36:52.599 [2024-07-10 14:39:01.987179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.987213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.987398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.987438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.987638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.987672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.987841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.987891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.988063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.988113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.988299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.988331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.988532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.988584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.988859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.988923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.989121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.989171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.989359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.989392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 
00:36:52.599 [2024-07-10 14:39:01.989567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.989619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.989832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.989883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.990086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.990138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.990293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.990326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.990528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.990580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.990918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.990970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.991195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.991246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.991455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.991489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.991726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.991776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.992020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.992086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 
00:36:52.599 [2024-07-10 14:39:01.992273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.992306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.599 qpair failed and we were unable to recover it. 00:36:52.599 [2024-07-10 14:39:01.992508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.599 [2024-07-10 14:39:01.992560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.992738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.992770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.992950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.992984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.993188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.993221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.993378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.993412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.993612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.993646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.993829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.993879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.994085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.994118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.994297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.994330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 
00:36:52.600 [2024-07-10 14:39:01.994546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.994597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.994797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.994853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.995059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.995108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.995318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.995351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.995574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.995634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.995879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.995929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.996138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.996190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.996364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.996397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.996578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.996629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.996826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.996877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 
00:36:52.600 [2024-07-10 14:39:01.997064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.997116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.997272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.997305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.997503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.997555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.997797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.997848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.998030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.998081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.998264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.998299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.998486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.998537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.998695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.998730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.998962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.999014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.999216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.999249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 
00:36:52.600 [2024-07-10 14:39:01.999432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.999468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.999624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.999657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:01.999809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:01.999842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:02.000037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:02.000070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:02.000261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:02.000294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:02.000472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:02.000506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:02.000675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.600 [2024-07-10 14:39:02.000728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.600 qpair failed and we were unable to recover it. 00:36:52.600 [2024-07-10 14:39:02.000961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.001012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.001209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.001242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.001420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.001462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 
00:36:52.601 [2024-07-10 14:39:02.001645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.001701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.001938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.001989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.002178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.002229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.002440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.002474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.002651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.002684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.002883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.002933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.003105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.003156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.003333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.003365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.003603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.003654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.003859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.003910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 
00:36:52.601 [2024-07-10 14:39:02.004109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.004161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.004316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.004352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.004541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.004592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.004821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.004872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.005083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.005134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.005309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.005343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.005540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.005591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.005764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.005814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.006042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.006093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.006245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.006279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 
00:36:52.601 [2024-07-10 14:39:02.006509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.006561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.006746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.006798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.006994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.007045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.007222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.007255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.007412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.007457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.007636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.007686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.007916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.007966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.008229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.008283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.008447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.008481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.008712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.008762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 
00:36:52.601 [2024-07-10 14:39:02.008970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.009021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.009167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.009199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.009351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.009386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.009590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.009641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.601 qpair failed and we were unable to recover it. 00:36:52.601 [2024-07-10 14:39:02.009870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.601 [2024-07-10 14:39:02.009922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.010180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.010236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.010409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.010452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.010661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.010712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.010918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.010978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.011176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.011226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 
00:36:52.602 [2024-07-10 14:39:02.011401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.011441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.011641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.011692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.011889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.011940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.012200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.012256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.012441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.012474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.012669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.012721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.012927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.012976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.013174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.013226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.013401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.013439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.013638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.013689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 
00:36:52.602 [2024-07-10 14:39:02.013920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.013970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.014284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.014350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.014513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.014547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.014748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.014798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.014997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.015048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.015284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.015317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.015490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.015541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.015745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.015795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.016008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.016060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.016221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.016256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 
00:36:52.602 [2024-07-10 14:39:02.016451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.016485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.016688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.016738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.016915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.016965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.017146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.017197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.017411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.017453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.017639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.017691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.017902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.017953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.018214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.602 [2024-07-10 14:39:02.018264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.602 qpair failed and we were unable to recover it. 00:36:52.602 [2024-07-10 14:39:02.018476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.018509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.018703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.018754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 
00:36:52.603 [2024-07-10 14:39:02.018982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.019032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.019240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.019272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.019448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.019498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.019734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.019784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.019991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.020041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.020242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.020275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.020456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.020489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.020719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.020768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.020976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.021027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.021234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.021267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 
00:36:52.603 [2024-07-10 14:39:02.021450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.021501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.021684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.021735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.021944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.021994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.022201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.022233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.022415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.022456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.022662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.022713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.022919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.022974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.023173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.023224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.023408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.023450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.023627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.023678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 
00:36:52.603 [2024-07-10 14:39:02.023875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.023926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.024126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.024181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.024327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.024361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.024569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.024620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.024829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.024881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.025160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.025218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.025443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.025476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.025641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.025693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.025900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.025950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.026126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.026176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 
00:36:52.603 [2024-07-10 14:39:02.026349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.026382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.026618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.026670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.026843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.026902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.027114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.027165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.603 [2024-07-10 14:39:02.027350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.603 [2024-07-10 14:39:02.027382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.603 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.027599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.027650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.027890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.027941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.028182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.028216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.028368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.028400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.028655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.028710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 
00:36:52.604 [2024-07-10 14:39:02.028880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.028932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.029159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.029209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.029406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.029447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.029657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.029708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.029914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.029964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.030257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.030312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.030538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.030590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.030791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.030841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.031047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.031098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.031303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.031337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 
00:36:52.604 [2024-07-10 14:39:02.031542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.031594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.031768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.031821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.032035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.032086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.032270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.032304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.032530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.032581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.032784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.032835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.033017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.033049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.033200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.033234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.033414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.033454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.033687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.033737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 
00:36:52.604 [2024-07-10 14:39:02.033919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.033969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.034178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.034233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.034415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.034456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.034626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.034678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.034876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.604 [2024-07-10 14:39:02.034926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.604 qpair failed and we were unable to recover it. 00:36:52.604 [2024-07-10 14:39:02.035174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.035224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.035403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.035443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.035681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.035732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.035938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.035989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.036285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.036336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 
00:36:52.605 [2024-07-10 14:39:02.036552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.036603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.036772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.036824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.037015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.037065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.037221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.037256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.037435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.037468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.037643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.037693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.037922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.037974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.038257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.038308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.038511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.038562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.038745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.038777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 
00:36:52.605 [2024-07-10 14:39:02.038930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.038964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.039169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.039201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.039387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.039420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.039655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.039707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.039918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.039973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.040151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.040203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.040355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.040389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.040619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.040674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.040888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.040939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.041090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.041123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 
00:36:52.605 [2024-07-10 14:39:02.041332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.041367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.041590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.041642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.041869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.041923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.042246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.042308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.042539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.605 [2024-07-10 14:39:02.042604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.605 qpair failed and we were unable to recover it. 00:36:52.605 [2024-07-10 14:39:02.042867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.042920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.043131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.043192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.043351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.043384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.043604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.043670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.043882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.043946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 
00:36:52.880 [2024-07-10 14:39:02.044168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.044230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.044415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.044488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.044718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.044777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.045000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.045063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.045249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.045293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.045492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.045555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.045757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.045819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.046069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.046123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.046269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.046302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.046512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.046546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 
00:36:52.880 [2024-07-10 14:39:02.046736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.046770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.046937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.046988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.047160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.047192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.047369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.880 [2024-07-10 14:39:02.047401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.880 qpair failed and we were unable to recover it. 00:36:52.880 [2024-07-10 14:39:02.047562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.047596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.047800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.047850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.048086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.048137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.048343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.048377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.048587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.048639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.048855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.048890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 
00:36:52.881 [2024-07-10 14:39:02.049099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.049151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.049331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.049365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.049607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.049659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.049896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.049947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.050113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.050164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.050365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.050398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.050623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.050674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.050873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.050924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.051167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.051201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.051372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.051405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 
00:36:52.881 [2024-07-10 14:39:02.051621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.051673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.051875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.051925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.052097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.052152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.052338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.052372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.052573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.052625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.052858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.052908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.053116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.053166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.053356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.053390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.053582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.053634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.053872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.053924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 
00:36:52.881 [2024-07-10 14:39:02.054158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.054208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.054439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.054484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.054708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.054759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.054994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.055045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.055235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.055274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.055452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.055490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.055661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.055712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.055916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.055967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.056143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.056176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.056331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.056364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 
00:36:52.881 [2024-07-10 14:39:02.056563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.056614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.056811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.056862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.057069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.057122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.057305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.881 [2024-07-10 14:39:02.057338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.881 qpair failed and we were unable to recover it. 00:36:52.881 [2024-07-10 14:39:02.057536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.057587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.057822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.057873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.058113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.058147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.058320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.058353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.058575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.058626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.058809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.058860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 
00:36:52.882 [2024-07-10 14:39:02.059010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.059053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.059199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.059232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.059379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.059413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.059658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.059710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.059911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.059963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.060167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.060199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.060380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.060413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.060630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.060680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.060868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.060903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.061214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.061270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 
00:36:52.882 [2024-07-10 14:39:02.061558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.061595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.061793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.061844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.062033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.062083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.062290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.062323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.062522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.062573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.062747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.062798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.062952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.062984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.063156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.063189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.063336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.063369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 00:36:52.882 [2024-07-10 14:39:02.063562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.882 [2024-07-10 14:39:02.063613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.882 qpair failed and we were unable to recover it. 
00:36:52.882 [2024-07-10 14:39:02.063816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.882 [2024-07-10 14:39:02.063867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:52.882 qpair failed and we were unable to recover it.
00:36:52.888 [... the same three-line error sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt logged from [2024-07-10 14:39:02.063816] through [2024-07-10 14:39:02.115003] (elapsed 00:36:52.882 - 00:36:52.888) ...]
00:36:52.888 [2024-07-10 14:39:02.115177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.115209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.115388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.115421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.115628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.115679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.116002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.116060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.116272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.116306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.116506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.116559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.116765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.116816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.117019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.117070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.117252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.117286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.117515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.117572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 
00:36:52.888 [2024-07-10 14:39:02.117751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.117783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.117961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.117995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.118150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.118184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.118367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.118400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.118618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.118651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.118931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.118993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.119145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.119177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.119350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.119383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.119612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.119667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.119880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.119932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 
00:36:52.888 [2024-07-10 14:39:02.120133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.120184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.120361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.120394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.120611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.120661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.120891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.120942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.121150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.121200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.121380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.121417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.121603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.121666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.121883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.121934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.122105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.122157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.122331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.122364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 
00:36:52.888 [2024-07-10 14:39:02.122551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.122607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.122782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.122833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.888 [2024-07-10 14:39:02.123066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.888 [2024-07-10 14:39:02.123117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.888 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.123296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.123329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.123535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.123588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.123838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.123889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.124060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.124111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.124261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.124295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.124513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.124564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.124748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.124781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 
00:36:52.889 [2024-07-10 14:39:02.124986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.125019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.125196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.125230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.125403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.125443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.125608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.125658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.125856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.125907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.126122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.126173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.126330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.126363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.126548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.126598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.126827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.126878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.127052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.127102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 
00:36:52.889 [2024-07-10 14:39:02.127271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.127304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.127459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.127497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.127675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.127726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.127935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.127984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.128192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.128225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.128472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.128524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.128736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.128787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.128960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.129010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.129217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.129253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.129415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.129455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 
00:36:52.889 [2024-07-10 14:39:02.129657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.129708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.129911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.129960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.130143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.130176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.130353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.130386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.130576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.130627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.130825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.130875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.131106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.889 [2024-07-10 14:39:02.131156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.889 qpair failed and we were unable to recover it. 00:36:52.889 [2024-07-10 14:39:02.131371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.131404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.131663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.131714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.131944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.131994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 
00:36:52.890 [2024-07-10 14:39:02.132231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.132281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.132473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.132507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.132714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.132767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.132967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.133018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.133191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.133224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.133378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.133411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.133626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.133677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.133902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.133953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.134121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.134171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.134314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.134347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 
00:36:52.890 [2024-07-10 14:39:02.134516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.134566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.134765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.134815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.134982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.135031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.135209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.135241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.135418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.135462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.135659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.135715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.135920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.135971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.136280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.136315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.136538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.136589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.136804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.136855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 
00:36:52.890 [2024-07-10 14:39:02.137055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.137115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.137289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.137322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.137498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.137551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.137754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.137804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.137991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.138042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.138223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.138255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.138461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.138495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.138689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.138740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.138942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.138997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.139195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.139245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 
00:36:52.890 [2024-07-10 14:39:02.139393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.139432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.139639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.139690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.139847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.139880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.140036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.140068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.140218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.140251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.140471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.140535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.890 qpair failed and we were unable to recover it. 00:36:52.890 [2024-07-10 14:39:02.140743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.890 [2024-07-10 14:39:02.140793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.140992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.141043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.141223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.141256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.141473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.141525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 
00:36:52.891 [2024-07-10 14:39:02.141735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.141784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.142014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.142064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.142284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.142316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.142514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.142566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.142734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.142789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.143116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.143176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.143379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.143412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.143628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.143679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.143894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.143944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.144115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.144148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 
00:36:52.891 [2024-07-10 14:39:02.144302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.144334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.144526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.144576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.144795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.144829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.145050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.145083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.145263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.145295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.145487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.145538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.145767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.145818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.146000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.146032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.146231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.146264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.146430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.146466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 
00:36:52.891 [2024-07-10 14:39:02.146665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.146716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.147003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.147058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.147275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.147308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.147503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.147554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.147746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.147797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.148076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.148133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.148313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.148345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.148558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.148609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.148815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.148870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.149194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.149256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 
00:36:52.891 [2024-07-10 14:39:02.149441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.149474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.149649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.149701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.149903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.149953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.150158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.150208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.150356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.150390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.891 [2024-07-10 14:39:02.150568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.891 [2024-07-10 14:39:02.150619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.891 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.150833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.150884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.151035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.151069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.151274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.151306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.151456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.151490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 
00:36:52.892 [2024-07-10 14:39:02.151718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.151769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.152085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.152141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.152335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.152368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.152555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.152606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.152837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.152898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.153190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.153244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.153394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.153437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.153639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.153690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.153902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.153953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.154171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.154223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 
00:36:52.892 [2024-07-10 14:39:02.154400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.154441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.154669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.154721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.154891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.154942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.155221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.155292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.155487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.155539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.155781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.155833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.156031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.156083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.156241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.156274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.156449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.156482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.156682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.156734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 
00:36:52.892 [2024-07-10 14:39:02.156927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.156978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.157181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.157214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.157394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.157434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.157634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.157684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.157886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.157937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.158143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.158193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.158343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.158377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.158586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.158637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.158864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.158918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.159158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.159214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 
00:36:52.892 [2024-07-10 14:39:02.159405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.159452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.159661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.159712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.159908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.159958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.160148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.160182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.160397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.160439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.892 [2024-07-10 14:39:02.160606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.892 [2024-07-10 14:39:02.160656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.892 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.160853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.160903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.161105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.161155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.161337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.161370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.161586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.161637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 
00:36:52.893 [2024-07-10 14:39:02.161821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.161872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.162069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.162120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.162322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.162355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.162578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.162613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.162846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.162896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.163216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.163270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.163503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.163554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.163788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.163838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.164039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.164089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.164295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.164328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 
00:36:52.893 [2024-07-10 14:39:02.164524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.164576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.164755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.164806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.165002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.165052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.165227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.165261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.165506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.165558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.165780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.165832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.166053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.166105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.166264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.166298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.166508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.166541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.166729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.166761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 
00:36:52.893 [2024-07-10 14:39:02.166989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.167041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.167220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.167253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.167437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.167471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.167650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.167683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.167861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.167912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.168116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.168166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.168316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.168349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.168556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.168608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.168804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.168867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 00:36:52.893 [2024-07-10 14:39:02.169040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.893 [2024-07-10 14:39:02.169091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.893 qpair failed and we were unable to recover it. 
00:36:52.893 [2024-07-10 14:39:02.169245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.169277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.169505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.169555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.169758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.169808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.169993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.170026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.170203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.170237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.170446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.170480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.170656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.170707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.170874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.170929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.171135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.171168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.171320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.171352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 
00:36:52.894 [2024-07-10 14:39:02.171555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.171606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.171811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.171861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.172035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.172085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.172308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.172357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.172571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.172610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.172793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.172840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.173040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.173076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.173351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.173388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.173582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.173614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.173823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.173860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 
00:36:52.894 [2024-07-10 14:39:02.174015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.174051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.174300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.174336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.174525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.174558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.174743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.174786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.174987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.175022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.175292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.175358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.175552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.175589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.175808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.175859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.176063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.176114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.176291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.176324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 
00:36:52.894 [2024-07-10 14:39:02.176514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.176567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.176764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.176814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.177032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.177083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.177270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.177304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.177469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.177503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.177681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.177714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.177914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.177964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.178149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.178183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.178367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.178405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 00:36:52.894 [2024-07-10 14:39:02.178592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.894 [2024-07-10 14:39:02.178644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.894 qpair failed and we were unable to recover it. 
00:36:52.895 [2024-07-10 14:39:02.178885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.178936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.179226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.179283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.179482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.179519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.179723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.179775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.179971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.180023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.180197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.180229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.180374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.180408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.180616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.180668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.180881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.180932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.181136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.181169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 
00:36:52.895 [2024-07-10 14:39:02.181347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.181379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.181622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.181674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.181926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.181978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.182174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.182225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.182435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.182469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.182738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.182789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.183026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.183076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.183260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.183292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.183455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.183490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.183697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.183746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 
00:36:52.895 [2024-07-10 14:39:02.183984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.184033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.184269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.184346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.184573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.184624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.184903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.184964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.185200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.185251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.185411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.185452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.185641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.185674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.185857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.185908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.186113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.186162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.186363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.186396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 
00:36:52.895 [2024-07-10 14:39:02.186578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.186611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.186788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.186839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.187007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.187059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.187242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.187276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.187434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.187496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.187706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.187756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.187952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.188003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.188208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.188241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.188395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.895 [2024-07-10 14:39:02.188441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.895 qpair failed and we were unable to recover it. 00:36:52.895 [2024-07-10 14:39:02.188622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.896 [2024-07-10 14:39:02.188673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.896 qpair failed and we were unable to recover it. 
00:36:52.896 [2024-07-10 14:39:02.188852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.896 [2024-07-10 14:39:02.188902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:52.896 qpair failed and we were unable to recover it.
00:36:52.901 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 14:39:02.188852 through 14:39:02.240444 ...]
00:36:52.901 [2024-07-10 14:39:02.240612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.240663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.240868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.240919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.241133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.241184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.241337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.241369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.241580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.241614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.241790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.241823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.242026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.242077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.242280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.242312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.242542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.242594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.242794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.242845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 
00:36:52.901 [2024-07-10 14:39:02.243042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.243092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.243266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.243299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.243528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.243580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.243779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.243830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.244060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.244111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.244261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.244299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.244513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.244581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.244749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.901 [2024-07-10 14:39:02.244784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.901 qpair failed and we were unable to recover it. 00:36:52.901 [2024-07-10 14:39:02.244986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.245020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.245197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.245230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 
00:36:52.902 [2024-07-10 14:39:02.245403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.245443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.245622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.245655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.245873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.245908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.246060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.246092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.246244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.246277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.246454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.246487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.246663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.246714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.246880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.246929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.247107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.247141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.247329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.247363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 
00:36:52.902 [2024-07-10 14:39:02.247565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.247614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.247836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.247889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.248089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.248128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.248374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.248408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.248601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.248636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.248850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.248886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.249082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.249118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.249317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.249354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.249570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.249603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.249782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.249818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 
00:36:52.902 [2024-07-10 14:39:02.250001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.250044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.250225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.250262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.250481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.250529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.250733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.250785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.251015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.251066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.251353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.251408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.251565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.251599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.251806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.251858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.252057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.252108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 00:36:52.902 [2024-07-10 14:39:02.252306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.902 [2024-07-10 14:39:02.252338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.902 qpair failed and we were unable to recover it. 
00:36:52.903 [2024-07-10 14:39:02.252525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.252558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.252794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.252845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.253049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.253110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.253271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.253304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.253505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.253557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.253795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.253850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.254052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.254102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.254272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.254304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.254504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.254555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.254791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.254841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 
00:36:52.903 [2024-07-10 14:39:02.255064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.255113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.255286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.255318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.255494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.255544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.255713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.255767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.255964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.256013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.256182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.256215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.256390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.256422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.256595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.256646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.256849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.256900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.257052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.257085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 
00:36:52.903 [2024-07-10 14:39:02.257265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.257297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.257484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.257517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.257724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.257758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.257940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.257974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.258179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.258211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.258362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.258395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.258603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.258636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.258812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.258846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.259041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.259091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.259299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.259332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 
00:36:52.903 [2024-07-10 14:39:02.259505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.259558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.259731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.259780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.260134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.260205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.260503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.260542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.260707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.260753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.260959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.260996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.261186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.261222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.261393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.903 [2024-07-10 14:39:02.261437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.903 qpair failed and we were unable to recover it. 00:36:52.903 [2024-07-10 14:39:02.261611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.261644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.261863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.261899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 
00:36:52.904 [2024-07-10 14:39:02.262151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.262187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.262407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.262449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.262642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.262674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.262876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.262914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.263133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.263181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.263374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.263415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.263647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.263680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.263930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.263986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.264238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.264274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.264511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.264544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 
00:36:52.904 [2024-07-10 14:39:02.264709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.264746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.264919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.264955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.265149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.265187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.265405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.265451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.265649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.265681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.265868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.265904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.266097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.266134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.266297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.266333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.266499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.266532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.266695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.266746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 
00:36:52.904 [2024-07-10 14:39:02.266950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.266983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.267203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.267255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.267485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.267521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.267678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.267729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.267910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.267944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.268241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.268314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.268520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.268553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.268730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.268762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.269000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.269036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.269235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.269270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 
00:36:52.904 [2024-07-10 14:39:02.269440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.269491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.269684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.269734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.269922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.269956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.270154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.270190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.270416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.270454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.270622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.904 [2024-07-10 14:39:02.270654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.904 qpair failed and we were unable to recover it. 00:36:52.904 [2024-07-10 14:39:02.270880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.905 [2024-07-10 14:39:02.270914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.905 qpair failed and we were unable to recover it. 00:36:52.905 [2024-07-10 14:39:02.271212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.905 [2024-07-10 14:39:02.271274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.905 qpair failed and we were unable to recover it. 00:36:52.905 [2024-07-10 14:39:02.271510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.905 [2024-07-10 14:39:02.271543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.905 qpair failed and we were unable to recover it. 00:36:52.905 [2024-07-10 14:39:02.271694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.905 [2024-07-10 14:39:02.271726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.905 qpair failed and we were unable to recover it. 
00:36:52.905 [2024-07-10 14:39:02.271958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.905 [2024-07-10 14:39:02.272016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.905 qpair failed and we were unable to recover it. 00:36:52.905 [2024-07-10 14:39:02.272321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.905 [2024-07-10 14:39:02.272378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.905 qpair failed and we were unable to recover it. 00:36:52.905 [2024-07-10 14:39:02.272597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.905 [2024-07-10 14:39:02.272630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.905 qpair failed and we were unable to recover it. 00:36:52.905 [2024-07-10 14:39:02.272985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.905 [2024-07-10 14:39:02.273047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.905 qpair failed and we were unable to recover it. 00:36:52.905 [2024-07-10 14:39:02.273345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.905 [2024-07-10 14:39:02.273402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.905 qpair failed and we were unable to recover it. 00:36:52.905 [2024-07-10 14:39:02.273595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.905 [2024-07-10 14:39:02.273633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.905 qpair failed and we were unable to recover it. 00:36:52.905 [2024-07-10 14:39:02.273860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.905 [2024-07-10 14:39:02.273895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.905 qpair failed and we were unable to recover it. 00:36:52.905 [2024-07-10 14:39:02.274077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.905 [2024-07-10 14:39:02.274112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.905 qpair failed and we were unable to recover it. 00:36:52.905 [2024-07-10 14:39:02.274298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.905 [2024-07-10 14:39:02.274334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.905 qpair failed and we were unable to recover it. 00:36:52.905 [2024-07-10 14:39:02.274544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.905 [2024-07-10 14:39:02.274577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.905 qpair failed and we were unable to recover it. 
00:36:52.910 [2024-07-10 14:39:02.321724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.321760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 00:36:52.910 [2024-07-10 14:39:02.321963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.322022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 00:36:52.910 [2024-07-10 14:39:02.322218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.322250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 00:36:52.910 [2024-07-10 14:39:02.322471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.322507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 00:36:52.910 [2024-07-10 14:39:02.322712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.322747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 00:36:52.910 [2024-07-10 14:39:02.322974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.323006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 00:36:52.910 [2024-07-10 14:39:02.323186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.323218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 00:36:52.910 [2024-07-10 14:39:02.323416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.323458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 00:36:52.910 [2024-07-10 14:39:02.323614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.323649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 00:36:52.910 [2024-07-10 14:39:02.323924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.323981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 
00:36:52.910 [2024-07-10 14:39:02.324198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.324230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 00:36:52.910 [2024-07-10 14:39:02.324464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.324496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 00:36:52.910 [2024-07-10 14:39:02.324676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.324709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 00:36:52.910 [2024-07-10 14:39:02.325031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.325095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 00:36:52.910 [2024-07-10 14:39:02.325292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.325326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.910 qpair failed and we were unable to recover it. 00:36:52.910 [2024-07-10 14:39:02.325564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.910 [2024-07-10 14:39:02.325596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.325783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.325815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.326002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.326038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.326227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.326259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.326423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.326465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 
00:36:52.911 [2024-07-10 14:39:02.326674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.326706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.326861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.326893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.327097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.327133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.327306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.327341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.327520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.327563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.327732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.327764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.327944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.327977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.328127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.328160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.328354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.328389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.328602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.328638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 
00:36:52.911 [2024-07-10 14:39:02.328810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.328842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.329013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.329044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.329226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.329258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.329451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.329488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.329664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.329696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.329869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.329902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.330102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.330137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.330326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.330361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.330558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.330590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.330741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.330773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 
00:36:52.911 [2024-07-10 14:39:02.330962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.330997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.331164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.331200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.331401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.331439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.331642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.331677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.331841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.331876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.332068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.332100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.332273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.332304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.332447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.332479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.332652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.911 [2024-07-10 14:39:02.332684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.911 qpair failed and we were unable to recover it. 00:36:52.911 [2024-07-10 14:39:02.332965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.333023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 
00:36:52.912 [2024-07-10 14:39:02.333222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.333255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.333450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.333486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.333718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.333750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.334008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.334065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.334235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.334266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.334420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.334457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.334640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.334675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.334930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.334988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.335221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.335253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.335453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.335488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 
00:36:52.912 [2024-07-10 14:39:02.335705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.335740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.336063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.336123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.336323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.336359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.336560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.336609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.336769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.336804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.336971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.337007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.337171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.337203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.337379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.337414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.337586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.337622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.337793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.337828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 
00:36:52.912 [2024-07-10 14:39:02.338030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.338063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.338267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.338303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.338465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.338502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.338668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.338703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.338925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.338957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.339148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.339184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.339409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.339450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.339650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.339687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.339912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.339946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.340153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.340191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 
00:36:52.912 [2024-07-10 14:39:02.340341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.340373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.340572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.340605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.340781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.340815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.341017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.341053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.341257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.341306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.341520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.341565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.341775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.341807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.341976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.342012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.912 [2024-07-10 14:39:02.342210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.912 [2024-07-10 14:39:02.342258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.912 qpair failed and we were unable to recover it. 00:36:52.913 [2024-07-10 14:39:02.342417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.913 [2024-07-10 14:39:02.342462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.913 qpair failed and we were unable to recover it. 
00:36:52.913 [2024-07-10 14:39:02.342686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.913 [2024-07-10 14:39:02.342718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.913 qpair failed and we were unable to recover it. 00:36:52.913 [2024-07-10 14:39:02.342890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.913 [2024-07-10 14:39:02.342931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.913 qpair failed and we were unable to recover it. 00:36:52.913 [2024-07-10 14:39:02.343150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.913 [2024-07-10 14:39:02.343193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.913 qpair failed and we were unable to recover it. 00:36:52.913 [2024-07-10 14:39:02.343447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.913 [2024-07-10 14:39:02.343484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:52.913 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.343710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.343742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.343976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.344012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.344207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.344243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.344445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.344478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.344626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.344658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.344843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.344875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 
00:36:53.191 [2024-07-10 14:39:02.345019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.345051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.345253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.345286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.345478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.345516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.345670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.345707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.345920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.345951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.346136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.346170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.346378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.346419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.346666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.346702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.346920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.346956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 00:36:53.191 [2024-07-10 14:39:02.347202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.191 [2024-07-10 14:39:02.347239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.191 qpair failed and we were unable to recover it. 
00:36:53.191 [2024-07-10 14:39:02.347466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.347499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.347699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.347733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.347879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.347932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.348217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.348250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.348475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.348519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.348701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.348738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.348911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.348947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.349147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.349191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.349368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.349400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.349607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.349642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 
00:36:53.192 [2024-07-10 14:39:02.349864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.349900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.350148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.350180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.350354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.350386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.350560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.350593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.350778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.350810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.350985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.351017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.351187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.351219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.351420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.351464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.351632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.351668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.351862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.351898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 
00:36:53.192 [2024-07-10 14:39:02.352085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.352118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.352313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.352349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.352555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.352588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.352788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.352838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.353010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.353042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.353240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.353278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.353469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.353505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.353673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.353709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.353881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.353913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.354061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.354093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 
00:36:53.192 [2024-07-10 14:39:02.354282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.354318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.354492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.354529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.354703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.354739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.354949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.354985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.355151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.355187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.355384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.355418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.355567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.355598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.355748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.355780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.355920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.355952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 00:36:53.192 [2024-07-10 14:39:02.356156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.192 [2024-07-10 14:39:02.356192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.192 qpair failed and we were unable to recover it. 
00:36:53.192 [2024-07-10 14:39:02.356389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.356421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.356631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.356667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.356877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.356919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.357146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.357192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.357384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.357417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.357622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.357657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.357895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.357930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.358284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.358337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.358567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.358600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.358775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.358811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 
00:36:53.193 [2024-07-10 14:39:02.359005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.359041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.359343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.359407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.359585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.359618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.359814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.359850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.360080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.360112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.360308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.360344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.360552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.360585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.360724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.360756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.360931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.360963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.361178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.361229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 
00:36:53.193 [2024-07-10 14:39:02.361434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.361468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.361687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.361726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.361934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.361967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.362161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.362197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.362400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.362445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.362644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.362680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.362895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.362931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.363155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.363210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.363404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.363443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.363637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.363673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 
00:36:53.193 [2024-07-10 14:39:02.363837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.363872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.364196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.364253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.364483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.364522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.364701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.364736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.364909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.364944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.365165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.365201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.365368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.365399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.365576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.365612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.365774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.365809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 00:36:53.193 [2024-07-10 14:39:02.365976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.366012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.193 qpair failed and we were unable to recover it. 
00:36:53.193 [2024-07-10 14:39:02.366234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.193 [2024-07-10 14:39:02.366266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.366479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.366515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.366736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.366768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.366968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.367003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.367206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.367238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.367414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.367453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.367652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.367688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.367986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.368050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.368270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.368302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.368507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.368543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 
00:36:53.194 [2024-07-10 14:39:02.368736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.368771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.369126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.369181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.369371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.369413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.369598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.369630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.369811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.369846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.370039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.370076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.370265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.370296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.370500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.370536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.370697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.370732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.370936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.370972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 
00:36:53.194 [2024-07-10 14:39:02.371175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.371207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.371387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.371418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.371643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.371679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.371944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.371999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.372195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.372227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.372391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.372436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.372634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.372681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.372877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.372915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.373113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.373145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.373320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.373352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 
00:36:53.194 [2024-07-10 14:39:02.373505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.373555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.373738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.373771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.373974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.374010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.374242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.374274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.374451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.374484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.374696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.374735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.374962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.374993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.375193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.375225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.375395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.375437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.194 [2024-07-10 14:39:02.375619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.375665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 
00:36:53.194 [2024-07-10 14:39:02.375860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.194 [2024-07-10 14:39:02.375892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.194 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.376094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.376131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.376352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.376384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.376561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.376593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.376769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.376801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.376990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.377026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.377252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.377289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.377519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.377555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.377759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.377792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.377986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.378023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 
00:36:53.195 [2024-07-10 14:39:02.378218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.378250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.378456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.378507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.378707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.378740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.378914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.378950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.379138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.379174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.379364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.379400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.379612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.379644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.379796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.379828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.379996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.380047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 
00:36:53.195 [2024-07-10 14:39:02.380245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.195 [2024-07-10 14:39:02.380560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.380608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.380816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.380851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.381027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.381077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.381312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.381362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.381515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.381548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.381723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.381756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.381958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.382011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.382315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.382367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.382600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.382635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 
00:36:53.195 [2024-07-10 14:39:02.382926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.382964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.383159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.383197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.383417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.383478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.383633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.383665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.383892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.195 [2024-07-10 14:39:02.383925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.195 qpair failed and we were unable to recover it. 00:36:53.195 [2024-07-10 14:39:02.384130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.384166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.384330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.384368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.384606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.384639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.384866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.384903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.385193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.385251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 
00:36:53.196 [2024-07-10 14:39:02.385532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.385566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.385763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.385800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.386000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.386037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.386233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.386271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.386512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.386545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.386695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.386756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.386926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.386977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.387149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.387204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.387454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.387518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.387683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.387718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 
00:36:53.196 [2024-07-10 14:39:02.387947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.387983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.388267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.388303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.388506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.388539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.388738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.388774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.389041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.389079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.389378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.389443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.389617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.389649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.389967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.390025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.390242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.390302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.390510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.390543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 
00:36:53.196 [2024-07-10 14:39:02.390697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.390754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.390957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.390990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.391258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.391294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.391533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.391566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.391769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.391801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.391998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.392035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.392322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.392380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.392570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.392602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.392800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.392837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.393104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.393141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 
00:36:53.196 [2024-07-10 14:39:02.393368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.393405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.393587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.393619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.393848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.393900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.394142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.394192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.394374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.394414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.196 qpair failed and we were unable to recover it. 00:36:53.196 [2024-07-10 14:39:02.394600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.196 [2024-07-10 14:39:02.394633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.394863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.394895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.395049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.395099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.395329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.395365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.395551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.395584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 
00:36:53.197 [2024-07-10 14:39:02.395744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.395777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.395967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.396002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.396372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.396454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.396677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.396709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.396909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.396965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.397335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.397400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.397611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.397644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.397855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.397899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.398101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.398133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.398327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.398363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 
00:36:53.197 [2024-07-10 14:39:02.398615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.398649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.398812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.398846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.399064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.399101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.399293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.399329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.399555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.399588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.399764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.399796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.399948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.399999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.400198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.400230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.400418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.400475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.400655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.400687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 
00:36:53.197 [2024-07-10 14:39:02.400870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.400912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.401153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.401186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.401360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.401393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.401573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.401605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.401779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.401811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.402088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.402145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.402353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.402385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.402575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.402608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.402812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.402848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.403019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.403051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 
00:36:53.197 [2024-07-10 14:39:02.403265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.403300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.403496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.403530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.403706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.403738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.403916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.403951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.404148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.404183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.404348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.197 [2024-07-10 14:39:02.404380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.197 qpair failed and we were unable to recover it. 00:36:53.197 [2024-07-10 14:39:02.404542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.404574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.404723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.404755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.404963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.404996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.405167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.405203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 
00:36:53.198 [2024-07-10 14:39:02.405364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.405400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.405609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.405642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.405817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.405849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.406026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.406059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.406265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.406297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.406511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.406544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.406749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.406802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.407047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.407089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.407280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.407314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.407467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.407501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 
00:36:53.198 [2024-07-10 14:39:02.407704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.407737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.407952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.407989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.408261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.408309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.408491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.408524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.408681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.408733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.408911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.408949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.409150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.409183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.409364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.409400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.409603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.409636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.409831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.409863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 
00:36:53.198 [2024-07-10 14:39:02.410083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.410119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.410346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.410381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.410622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.410655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.410867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.410905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.411217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.411274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.411476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.411509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.411666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.411698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.411946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.411997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.412187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.412221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.412404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.412448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 
00:36:53.198 [2024-07-10 14:39:02.412626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.412659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.412873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.412905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.413070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.413105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.413324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.413360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.413549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.413581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.413728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.198 [2024-07-10 14:39:02.413777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.198 qpair failed and we were unable to recover it. 00:36:53.198 [2024-07-10 14:39:02.414034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.414070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.414262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.414294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.414533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.414566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.414707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.414756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 
00:36:53.199 [2024-07-10 14:39:02.414937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.414969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.415170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.415206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.415442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.415495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.415703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.415736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.415967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.416003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.416267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.416305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.416534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.416567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.416772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.416813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.417025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.417058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.417237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.417269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 
00:36:53.199 [2024-07-10 14:39:02.417420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.417457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.417639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.417673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.417880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.417912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.418107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.418142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.418343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.418377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.418542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.418576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.418795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.418831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.419069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.419101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.419303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.419335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.419490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.419523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 
00:36:53.199 [2024-07-10 14:39:02.419698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.419732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.419916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.419949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.420120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.420155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.420383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.420420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.420656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.420688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.420896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.420932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.421147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.421182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.421355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.421388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.421612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.421644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.421869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.421901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 
00:36:53.199 [2024-07-10 14:39:02.422091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.422123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.422324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.422360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.422596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.422629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.422783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.422816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.423018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.423054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.199 [2024-07-10 14:39:02.423277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.199 [2024-07-10 14:39:02.423313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.199 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.423517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.423550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.423767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.423803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.424085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.424123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.424329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.424363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 
00:36:53.200 [2024-07-10 14:39:02.424541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.424573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.424717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.424750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.424890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.424922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.425123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.425159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.425419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.425471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.425682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.425717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.425888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.425924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.426135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.426172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.426350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.426382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.426543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.426575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 
00:36:53.200 [2024-07-10 14:39:02.426909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.426975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.427174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.427206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.427408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.427465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.427661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.427693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.427917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.427948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.428100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.428132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.428351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.428387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.428566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.428599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.428773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.428838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.429081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.429138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 
00:36:53.200 [2024-07-10 14:39:02.429359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.429391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.429586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.429621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.429980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.430035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.430294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.430325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.430535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.430569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.430756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.430807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.431034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.431068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.431260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.431304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.431512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.431544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 00:36:53.200 [2024-07-10 14:39:02.431730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.431764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.200 qpair failed and we were unable to recover it. 
00:36:53.200 [2024-07-10 14:39:02.432002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.200 [2024-07-10 14:39:02.432034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.432325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.432362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.432536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.432569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.432763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.432799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.432998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.433034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.433257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.433289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.433499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.433531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.433715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.433766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.433943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.433975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.434208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.434243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 
00:36:53.201 [2024-07-10 14:39:02.434438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.434475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.434695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.434727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.434979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.435012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.435193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.435225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.435431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.435464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.435610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.435642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.435924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.435960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.436153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.436189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.436393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.436439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 00:36:53.201 [2024-07-10 14:39:02.436644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.201 [2024-07-10 14:39:02.436676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.201 qpair failed and we were unable to recover it. 
00:36:53.202 [2024-07-10 14:39:02.441855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.202 [2024-07-10 14:39:02.441909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:53.202 qpair failed and we were unable to recover it.
00:36:53.202 [2024-07-10 14:39:02.442092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.202 [2024-07-10 14:39:02.442127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:53.202 qpair failed and we were unable to recover it.
00:36:53.202 [2024-07-10 14:39:02.442304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.202 [2024-07-10 14:39:02.442340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:53.202 qpair failed and we were unable to recover it.
00:36:53.202 [2024-07-10 14:39:02.442582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.202 [2024-07-10 14:39:02.442616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:53.202 qpair failed and we were unable to recover it.
00:36:53.202 [2024-07-10 14:39:02.442823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.202 [2024-07-10 14:39:02.442856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:53.202 qpair failed and we were unable to recover it.
00:36:53.202 [2024-07-10 14:39:02.443050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.202 [2024-07-10 14:39:02.443085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:53.202 qpair failed and we were unable to recover it.
00:36:53.202 [2024-07-10 14:39:02.443255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.202 [2024-07-10 14:39:02.443290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:53.202 qpair failed and we were unable to recover it.
00:36:53.202 [2024-07-10 14:39:02.443509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.202 [2024-07-10 14:39:02.443541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:53.202 qpair failed and we were unable to recover it.
00:36:53.202 [2024-07-10 14:39:02.443760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.202 [2024-07-10 14:39:02.443795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:53.202 qpair failed and we were unable to recover it.
00:36:53.202 [2024-07-10 14:39:02.444121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.202 [2024-07-10 14:39:02.444180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:53.202 qpair failed and we were unable to recover it.
00:36:53.209 [2024-07-10 14:39:02.483270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.483306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.483493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.483526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.483681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.483716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.483934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.483970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.484196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.484228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.484370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.484403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.484586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.484622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.484821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.484853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.485084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.485120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.485317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.485353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 
00:36:53.209 [2024-07-10 14:39:02.485559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.485592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.485744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.485805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.486032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.486068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.486247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.486279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.486459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.486491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.486689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.486734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.486907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.486940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.487130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.487165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.487365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.487401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.487611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.487644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 
00:36:53.209 [2024-07-10 14:39:02.487869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.487905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.488072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.488108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.488299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.488331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.488531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.488567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.209 [2024-07-10 14:39:02.488801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.209 [2024-07-10 14:39:02.488837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.209 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.489055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.489087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.489259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.489295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.489498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.489534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.489758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.489801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.489968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.490004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 
00:36:53.210 [2024-07-10 14:39:02.490173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.490209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.490390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.490435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.490637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.490673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.490858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.490906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.491084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.491116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.491309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.491346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.491542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.491574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.491759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.491795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.491968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.492003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.492221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.492256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 
00:36:53.210 [2024-07-10 14:39:02.492488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.492521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.492699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.492747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.492968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.493003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.493202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.493236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.493465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.493501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.493672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.493720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.493886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.493918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.494114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.494149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.494316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.494352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.494558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.494591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 
00:36:53.210 [2024-07-10 14:39:02.494784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.494826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.210 qpair failed and we were unable to recover it. 00:36:53.210 [2024-07-10 14:39:02.495069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.210 [2024-07-10 14:39:02.495105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.495337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.495369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.495570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.495604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.495809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.495844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.496045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.496079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.496301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.496337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.496541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.496577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.496800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.496832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.497034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.497070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 
00:36:53.211 [2024-07-10 14:39:02.497257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.497292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.497493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.497525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.497728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.497764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.497991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.498023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.498204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.498237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.498400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.498445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.498629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.498665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.498867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.498905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.499080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.499116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.499323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.499355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 
00:36:53.211 [2024-07-10 14:39:02.499538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.499570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.499753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.499793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.500001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.500037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.500261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.500293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.500538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.500574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.500783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.500815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.501021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.501053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.501275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.501316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.501544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.501580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.501810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.501842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 
00:36:53.211 [2024-07-10 14:39:02.502046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.502082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.502273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.502308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.502490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.502523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.502719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.502754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.502982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.503014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.503162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.211 [2024-07-10 14:39:02.503194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.211 qpair failed and we were unable to recover it. 00:36:53.211 [2024-07-10 14:39:02.503420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.503463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.503628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.503666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.503897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.503930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.504098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.504135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 
00:36:53.212 [2024-07-10 14:39:02.504362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.504398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.504591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.504624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.504790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.504826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.504988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.505026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.505233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.505265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.505498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.505535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.505702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.505743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.505946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.505988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.506191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.506241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.506403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.506453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 
00:36:53.212 [2024-07-10 14:39:02.506622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.506654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.506829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.506865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.507059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.507091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.507291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.507323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.507525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.507565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.507757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.507795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.508020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.508052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.508253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.508291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.508496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.508533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.508715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.508747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 
00:36:53.212 [2024-07-10 14:39:02.508896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.508928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.509167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.509202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.509398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.509443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.509637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.509673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.509888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.509923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.510092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.510125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.510321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.510357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.212 qpair failed and we were unable to recover it. 00:36:53.212 [2024-07-10 14:39:02.510601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.212 [2024-07-10 14:39:02.510637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.510844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.510877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.511053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.511085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 
00:36:53.213 [2024-07-10 14:39:02.511290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.511325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.511504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.511537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.511757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.511802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.511997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.512032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.512229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.512261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.512464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.512499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.512691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.512734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.512959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.512991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.513192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.513227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.513421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.513468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 
00:36:53.213 [2024-07-10 14:39:02.513675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.513718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.513939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.513975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.514196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.514231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.514441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.514473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.514693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.514734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.514885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.514921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.515111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.515144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.515353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.515388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.515583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.515615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 00:36:53.213 [2024-07-10 14:39:02.515762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.213 [2024-07-10 14:39:02.515795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.213 qpair failed and we were unable to recover it. 
00:36:53.221 [2024-07-10 14:39:02.561963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.221 [2024-07-10 14:39:02.561996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.221 qpair failed and we were unable to recover it. 00:36:53.221 [2024-07-10 14:39:02.562150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.221 [2024-07-10 14:39:02.562201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.221 qpair failed and we were unable to recover it. 00:36:53.221 [2024-07-10 14:39:02.562407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.221 [2024-07-10 14:39:02.562446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.221 qpair failed and we were unable to recover it. 00:36:53.221 [2024-07-10 14:39:02.562625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.221 [2024-07-10 14:39:02.562657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.221 qpair failed and we were unable to recover it. 00:36:53.221 [2024-07-10 14:39:02.562890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.221 [2024-07-10 14:39:02.562929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.221 qpair failed and we were unable to recover it. 00:36:53.221 [2024-07-10 14:39:02.563163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.221 [2024-07-10 14:39:02.563195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.221 qpair failed and we were unable to recover it. 00:36:53.221 [2024-07-10 14:39:02.563341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.221 [2024-07-10 14:39:02.563373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.221 qpair failed and we were unable to recover it. 00:36:53.221 [2024-07-10 14:39:02.563540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.221 [2024-07-10 14:39:02.563572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.221 qpair failed and we were unable to recover it. 00:36:53.221 [2024-07-10 14:39:02.563790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.563827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.564008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.564040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 
00:36:53.222 [2024-07-10 14:39:02.564228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.564264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.564457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.564491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.564694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.564727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.564908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.564954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.565142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.565177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.565374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.565417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.565650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.565682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.565920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.565956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.566183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.566215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.566391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.566434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 
00:36:53.222 [2024-07-10 14:39:02.566614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.566648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.566880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.566912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.567090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.567122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.567288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.567339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.567527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.567559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.567714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.567746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.567907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.567943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.568135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.568167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.568354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.568390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.568632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.568665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 
00:36:53.222 [2024-07-10 14:39:02.568820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.568852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.569021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.569054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.569225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.569262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.569438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.569471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.569671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.569715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.569914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.569950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.570120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.570152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.570353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.570389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.570609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.570641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.570815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.570847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 
00:36:53.222 [2024-07-10 14:39:02.571072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.571107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.571317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.222 [2024-07-10 14:39:02.571354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.222 qpair failed and we were unable to recover it. 00:36:53.222 [2024-07-10 14:39:02.571579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.571613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.571770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.571809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.571996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.572032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.572213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.572246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.572449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.572484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.572675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.572711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.572894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.572927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.573082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.573118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 
00:36:53.223 [2024-07-10 14:39:02.573311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.573347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.573581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.573614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.573841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.573873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.574055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.574088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.574290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.574322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.574461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.574494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.574647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.574697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.574907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.574939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.575143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.575179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.575337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.575373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 
00:36:53.223 [2024-07-10 14:39:02.575594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.575629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.575843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.575876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.576041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.576073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.576282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.576314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.576527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.576560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.576741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.576793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.576987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.577019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.577195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.577231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.577429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.577466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.577634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.577666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 
00:36:53.223 [2024-07-10 14:39:02.577882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.577918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.578101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.578134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.578315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.578347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.578531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.578564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.578760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.578798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.578985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.579017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.579218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.579253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.579451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.579498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.579699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.579742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.579940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.579976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 
00:36:53.223 [2024-07-10 14:39:02.580196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.580228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.580432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.223 [2024-07-10 14:39:02.580465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.223 qpair failed and we were unable to recover it. 00:36:53.223 [2024-07-10 14:39:02.580630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.580666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.580841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.580873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.581046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.581082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.581279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.581315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.581545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.581581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.581791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.581823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.582001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.582036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.582201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.582237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 
00:36:53.224 [2024-07-10 14:39:02.582449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.582482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.582685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.582721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.582912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.582948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.583122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.583155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.583330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.583362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.583568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.583604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.583814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.583846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.584053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.584103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.584304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.584340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.584538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.584571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 
00:36:53.224 [2024-07-10 14:39:02.584784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.584816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.585035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.585071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.585255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.585287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.585484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.585520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.585734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.585766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.585914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.585947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.586141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.586177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.586380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.586421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.586608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.586641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.586871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.586906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 
00:36:53.224 [2024-07-10 14:39:02.587105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.587165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.587379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.587423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.587605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.587640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.587828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.587864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.588081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.224 [2024-07-10 14:39:02.588112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.224 qpair failed and we were unable to recover it. 00:36:53.224 [2024-07-10 14:39:02.588295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.588331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.588523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.588560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.588792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.588824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.589049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.589085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.589276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.589312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 
00:36:53.225 [2024-07-10 14:39:02.589515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.589547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.589721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.589757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.589983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.590019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.590197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.590229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.590451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.590492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.590685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.590720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.590926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.590958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.591177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.591213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.591420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.591460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.591636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.591675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 
00:36:53.225 [2024-07-10 14:39:02.591879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.591915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.592090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.592126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.592354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.592386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.592575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.592611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.592808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.592844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.593071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.593103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.593298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.593334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.593511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.593547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.593754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.593787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.594001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.594034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 
00:36:53.225 [2024-07-10 14:39:02.594230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.594265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.594485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.594527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.594686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.594719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.594890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.594922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.595100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.595132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.595329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.595364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.595579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.595611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.595787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.595819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.595995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.596027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.596232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.596281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 
00:36:53.225 [2024-07-10 14:39:02.596467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.596500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.596681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.596717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.596931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.596964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.597139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.597171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.597402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.225 [2024-07-10 14:39:02.597452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.225 qpair failed and we were unable to recover it. 00:36:53.225 [2024-07-10 14:39:02.597646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.597682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.597890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.597922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.598144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.598179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.598383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.598417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.598643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.598676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 
00:36:53.226 [2024-07-10 14:39:02.598909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.598945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.599159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.599195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.599358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.599390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.599572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.599608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.599807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.599847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.600052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.600084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.600254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.600290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.600475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.600511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.600701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.600740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.600984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.601016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 
00:36:53.226 [2024-07-10 14:39:02.601168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.601200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.601343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.601375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.601569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.601617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.601792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.601827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.602032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.602064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.602271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.602303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.602445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.602493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.602690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.602729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.602938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.602973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.603159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.603195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 
00:36:53.226 [2024-07-10 14:39:02.603422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.603460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.603632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.603668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.603867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.603903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.604071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.604103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.604274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.604310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.604509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.604545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.604740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.604773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.604971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.605007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.605246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.605278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.605422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.605459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 
00:36:53.226 [2024-07-10 14:39:02.605603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.605635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.605813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.605846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.606053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.606085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.606257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.606292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.226 qpair failed and we were unable to recover it. 00:36:53.226 [2024-07-10 14:39:02.606492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.226 [2024-07-10 14:39:02.606528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.606732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.606764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.606942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.606976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.607216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.607248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.607398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.607440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.607634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.607670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 
00:36:53.227 [2024-07-10 14:39:02.607901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.607937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.608137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.608169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.608370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.608406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.608640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.608677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1554735 Killed "${NVMF_APP[@]}" "$@" 00:36:53.227 [2024-07-10 14:39:02.608872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.608904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 14:39:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:36:53.227 [2024-07-10 14:39:02.609099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.609147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 14:39:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:53.227 [2024-07-10 14:39:02.609380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.609414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 14:39:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:53.227 14:39:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:53.227 [2024-07-10 14:39:02.609621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.609654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 
00:36:53.227 14:39:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:53.227 [2024-07-10 14:39:02.609899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.609932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.610070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.610120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.610319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.610351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.610502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.610535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.610689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.610723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.610875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.610907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.611070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.611106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.611336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.611372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.611603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.611635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 
00:36:53.227 [2024-07-10 14:39:02.611840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.611876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.612039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.612075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.612252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.612284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.612438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.612490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.612683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.612718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.612900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.612932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.613115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.613149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 [2024-07-10 14:39:02.613321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.613357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 14:39:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1555410 00:36:53.227 [2024-07-10 14:39:02.613537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.613570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 14:39:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:53.227 qpair failed and we were unable to recover it. 
00:36:53.227 14:39:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1555410 00:36:53.227 [2024-07-10 14:39:02.613751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.613783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 14:39:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1555410 ']' 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 14:39:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:53.227 [2024-07-10 14:39:02.613970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 [2024-07-10 14:39:02.614010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.227 14:39:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:53.227 [2024-07-10 14:39:02.614212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.227 14:39:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:53.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:53.227 [2024-07-10 14:39:02.614244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.227 qpair failed and we were unable to recover it. 00:36:53.228 14:39:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:53.228 [2024-07-10 14:39:02.614414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.614462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 14:39:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:53.228 [2024-07-10 14:39:02.614637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.614675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.614877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.614910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 
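The xtrace lines interleaved with the connect() errors above show the tc2 test case restarting the target: the previous nvmf_tgt (pid 1554735) was killed at line 36 of target_disconnect.sh, then disconnect_init and nvmfappstart relaunch build/bin/nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with core mask 0xF0, and waitforlisten blocks until the new process (pid 1555410, nvmfpid above) is listening on /var/tmp/spdk.sock. A minimal sketch of what that traced sequence amounts to, simplified from the real helpers in nvmf/common.sh and autotest_common.sh (the launch command is taken verbatim from the trace; the socket poll is only a crude stand-in for the actual waitforlisten helper):
    # relaunch the target in the test's network namespace
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # crude stand-in for waitforlisten: poll until the RPC UNIX domain socket appears
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
Until that restart completes, the initiator's connect() retries logged here keep failing with ECONNREFUSED.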
00:36:53.228 [2024-07-10 14:39:02.615108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.615144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.615321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.615353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.615531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.615570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.615745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.615781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.615949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.615986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.616177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.616209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.616438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.616475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.616705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.616741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.616934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.616966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.617158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.617194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 
00:36:53.228 [2024-07-10 14:39:02.617429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.617465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.617641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.617675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.617862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.617898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.618088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.618124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.618294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.618327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.618530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.618566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.618722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.618758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.618984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.619017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.619185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.619217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.619381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.619416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 
00:36:53.228 [2024-07-10 14:39:02.619634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.619666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.619844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.619877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.620046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.620078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.620228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.620261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.620436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.620473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.620636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.620673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.620871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.620903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.621110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.621142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.621282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.621325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 00:36:53.228 [2024-07-10 14:39:02.621491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.228 [2024-07-10 14:39:02.621525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.228 qpair failed and we were unable to recover it. 
00:36:53.228 [2024-07-10 14:39:02.621718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.621755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.621931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.621967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.622133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.622171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.622405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.622447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.622663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.622699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.622910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.622943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.623179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.623212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.623391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.623439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.623623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.623656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.623807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.623849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 
00:36:53.229 [2024-07-10 14:39:02.624036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.624072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.624245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.624279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.624437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.624470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.624624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.624674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.624913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.624945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.625107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.625143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.625337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.625373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.625555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.625587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.625740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.625789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.625980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.626017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 
00:36:53.229 [2024-07-10 14:39:02.626222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.626255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.626399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.626437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.626613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.626645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.626858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.626890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.627037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.627069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.627263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.627299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.627521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.627554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.627734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.627771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.627962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.627997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 00:36:53.229 [2024-07-10 14:39:02.628195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.229 [2024-07-10 14:39:02.628227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.229 qpair failed and we were unable to recover it. 
00:36:53.229 [2024-07-10 14:39:02.628391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.229 [2024-07-10 14:39:02.628434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:53.229 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt logged between 14:39:02.628 and 14:39:02.659 ...]
00:36:53.513 [2024-07-10 14:39:02.659288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.513 [2024-07-10 14:39:02.659320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:53.513 qpair failed and we were unable to recover it.
00:36:53.513 [2024-07-10 14:39:02.659543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.513 [2024-07-10 14:39:02.659592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:53.513 qpair failed and we were unable to recover it.
[... the same failure sequence then repeats for the new tqpair=0x61500021ff00 (addr=10.0.0.2, port=4420) on every attempt logged between 14:39:02.659 and 14:39:02.675 ...]
00:36:53.514 [2024-07-10 14:39:02.675806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.514 [2024-07-10 14:39:02.675838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:53.514 qpair failed and we were unable to recover it.
00:36:53.514 [2024-07-10 14:39:02.676027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.514 [2024-07-10 14:39:02.676060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.514 qpair failed and we were unable to recover it. 00:36:53.514 [2024-07-10 14:39:02.676260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.514 [2024-07-10 14:39:02.676293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.514 qpair failed and we were unable to recover it. 00:36:53.514 [2024-07-10 14:39:02.676443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.514 [2024-07-10 14:39:02.676481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.514 qpair failed and we were unable to recover it. 00:36:53.514 [2024-07-10 14:39:02.676733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.676767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.676976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.677013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.677197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.677229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.677396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.677446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.677595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.677628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.678013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.678049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.678241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.678275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 
00:36:53.515 [2024-07-10 14:39:02.678459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.678494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.678675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.678708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.678898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.678931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.679115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.679149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.679347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.679396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.679643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.679679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.679887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.679924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.680159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.680208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.680388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.680421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 00:36:53.515 [2024-07-10 14:39:02.680628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.515 [2024-07-10 14:39:02.680661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.515 qpair failed and we were unable to recover it. 
00:36:53.515 [2024-07-10 14:39:02.680828 - 14:39:02.684544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. [error pair repeated for every failed connect attempt in this interval]
00:36:53.515 [2024-07-10 14:39:02.682344 - 14:39:02.700317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. [error pair repeated for every failed connect attempt in this interval, interleaved with the retries on tqpair 0x61500021ff00 above]
00:36:53.517 [2024-07-10 14:39:02.700553 - 14:39:02.704510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. [error pair repeated for every failed connect attempt in this interval]
00:36:53.517 [2024-07-10 14:39:02.701808] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization...
00:36:53.517 [2024-07-10 14:39:02.701938] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:53.518 [2024-07-10 14:39:02.704710 - 14:39:02.705500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. [error pair repeated for every failed connect attempt in this interval]
00:36:53.518 [2024-07-10 14:39:02.705687 - 14:39:02.719452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. [error pair repeated for every failed connect attempt in this interval]
00:36:53.518 [2024-07-10 14:39:02.705988 - 14:39:02.717830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. [error pair repeated for every failed connect attempt in this interval, interleaved with the retries on tqpair 0x615000210000 above]
00:36:53.518 [2024-07-10 14:39:02.709638 - 14:39:02.709960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. [error pair repeated for the failed connect attempts in this interval]
00:36:53.519 [2024-07-10 14:39:02.719605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.719639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.719879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.719930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.720139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.720190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.720340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.720374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.720605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.720659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.720833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.720893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.721105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.721157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.721344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.721379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.721593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.721644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.721866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.721918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 
00:36:53.519 [2024-07-10 14:39:02.722089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.722141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.722323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.722356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.722551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.722604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.722802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.722854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.723059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.723109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.723307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.723340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.723542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.723593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.723786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.723838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.724052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.724104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.724313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.724346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 
00:36:53.519 [2024-07-10 14:39:02.724552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.724606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.724805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.724857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.725033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.725084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.725261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.725294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.725528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.725580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.725776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.519 [2024-07-10 14:39:02.725828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.519 qpair failed and we were unable to recover it. 00:36:53.519 [2024-07-10 14:39:02.726076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.726128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.726374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.726451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.726653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.726689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.726917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.726968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 
00:36:53.520 [2024-07-10 14:39:02.727118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.727153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.727337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.727370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.727561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.727618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.727845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.727897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.728112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.728152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.728348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.728384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.728604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.728637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.728900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.728940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.729113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.729151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.729371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.729404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 
00:36:53.520 [2024-07-10 14:39:02.729593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.729625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.729848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.729900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.730278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.730318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.730538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.730572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.730799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.730835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.731086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.731140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.731343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.731379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.731564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.731598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.731815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.731862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.732075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.732129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 
00:36:53.520 [2024-07-10 14:39:02.732300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.732355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.732547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.732581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.732813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.732868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.733116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.733171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.733385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.733418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.733584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.733626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.733861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.733911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.734106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.734157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.734338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.734371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.520 [2024-07-10 14:39:02.734550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.734584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 
00:36:53.520 [2024-07-10 14:39:02.734827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.520 [2024-07-10 14:39:02.734878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.520 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.735124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.735175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.735380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.735412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.735607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.735641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.735857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.735907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.736114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.736167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.736318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.736352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.736571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.736622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.737242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.737279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.737482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.737536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 
00:36:53.521 [2024-07-10 14:39:02.737733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.737784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.737958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.738007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.738210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.738248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.738450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.738500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.738702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.738751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.739119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.739171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.739322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.739355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.739552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.739603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.739838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.739890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.740067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.740106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 
00:36:53.521 [2024-07-10 14:39:02.740462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.740525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.740743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.740782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.740953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.740989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.741161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.741199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.741434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.741467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.741621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.741654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.741886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.741938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.742203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.742263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.742469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.742520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.742721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.742772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 
00:36:53.521 [2024-07-10 14:39:02.742976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.743027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.743200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.743233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.743386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.743421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.743625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.743676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.743871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.743905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.744147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.744199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.744413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.744464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.744680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.744732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.744916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.744954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 00:36:53.521 [2024-07-10 14:39:02.745198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.521 [2024-07-10 14:39:02.745252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.521 qpair failed and we were unable to recover it. 
00:36:53.521 [2024-07-10 14:39:02.745460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.745493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.745704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.745741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.745926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.745977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.746187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.746221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.746445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.746498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.746657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.746690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.746843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.746875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.747147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.747183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.747418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.747462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.747661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.747694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 
00:36:53.522 [2024-07-10 14:39:02.747965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.748029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.748248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.748284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.748524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.748564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.748745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.748801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.749083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.749135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.749343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.749381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.749591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.749625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.749824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.749857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.750062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.750097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.750270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.750306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 
00:36:53.522 [2024-07-10 14:39:02.750523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.750556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.750764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.750801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.751050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.751110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.751483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.751517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.751669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.751701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.751912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.751949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.752331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.752395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.752608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.752641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.752904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.752940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.753154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.753190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 
00:36:53.522 [2024-07-10 14:39:02.753390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.753432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.753637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.753670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.753850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.753883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.754052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.754089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.754284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.754319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.754501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.754534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.754759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.754795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.755026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.755084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.755307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.755343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 00:36:53.522 [2024-07-10 14:39:02.755535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.755568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.522 qpair failed and we were unable to recover it. 
00:36:53.522 [2024-07-10 14:39:02.755783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.522 [2024-07-10 14:39:02.755835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.756068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.756102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.756332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.756382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.756619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.756653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.756842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.756875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.757047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.757084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.757312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.757349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.757579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.757613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.757831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.757867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.758105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.758162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 
00:36:53.523 [2024-07-10 14:39:02.758392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.758433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.758617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.758649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.758904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.758943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.759285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.759354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.759534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.759567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.759733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.759768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.759965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.759998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.760172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.760209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.760377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.760415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.760622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.760670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 
00:36:53.523 [2024-07-10 14:39:02.760909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.760962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.761300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.761366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.761541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.761577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.761808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.761860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.762059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.762111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.762308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.762360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.762581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.762615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.762966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.763039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.763353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.763412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.763627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.763660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 
00:36:53.523 [2024-07-10 14:39:02.763852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.763905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.764114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.764166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.764385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.764420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.764648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.764701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.764901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.764954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.765132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.765170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.765376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.765415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.765603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.765636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.765873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.765909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 00:36:53.523 [2024-07-10 14:39:02.766186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.766254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.523 qpair failed and we were unable to recover it. 
00:36:53.523 [2024-07-10 14:39:02.766463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.523 [2024-07-10 14:39:02.766499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.766717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.766754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.767053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.767108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.767419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.767463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.767626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.767658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.767872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.767905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.768202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.768259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.768490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.768524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.768673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.768723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.768915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.768950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 
00:36:53.524 [2024-07-10 14:39:02.769127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.769177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.769352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.769385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.769616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.769654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.769830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.769867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.770125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.770182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.770451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.770501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.770660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.770693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.770899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.770948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.771171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.771217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.771411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.771470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 
00:36:53.524 [2024-07-10 14:39:02.771641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.771674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.771938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.771974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.772134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.772170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.772368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.772404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.772600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.772633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.772820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.772856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.773112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.773148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.773371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.773417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.773638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.773670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.773861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.773909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 
00:36:53.524 [2024-07-10 14:39:02.774214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.774250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.774453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.774487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.774667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.774718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.774924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.774958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.775125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.775163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.775355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.775393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.775578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.775611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.775819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.775855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.776064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.776095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.524 [2024-07-10 14:39:02.776256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.776289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 
00:36:53.524 [2024-07-10 14:39:02.776510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.524 [2024-07-10 14:39:02.776546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.524 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.776742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.776777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.776978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.777011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.777185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.777223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.777376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.777418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.777625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.777657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.777876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.777921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.778133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.778165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.778369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.778401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.778587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.778624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 
00:36:53.525 [2024-07-10 14:39:02.778818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.778855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.779086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.779119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.779345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.779382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.779542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.779574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.779811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.779844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.780046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.780083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.780299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.780335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.780562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.780595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.780799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.780832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.781006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.781057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 
00:36:53.525 [2024-07-10 14:39:02.781233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.781265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.781429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.781465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.781656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.781692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.781862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.781896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.782052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.782084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.782256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.782288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.782504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.782537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.782707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.782743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.782913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.782949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.783121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.783154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 
00:36:53.525 [2024-07-10 14:39:02.783338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.783370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.783540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.783573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.783722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.783763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.783959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.783995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.784202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.784244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.525 [2024-07-10 14:39:02.784421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.525 [2024-07-10 14:39:02.784468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.525 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.784633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.784670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.784892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.784928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.785127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.785159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.785342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.785378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 
00:36:53.526 [2024-07-10 14:39:02.785608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.785640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.785823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.785856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.786050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.786086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.786245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.786281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.786487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.786519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.786718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.786754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.786959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.786991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.787143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.787176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.787332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.787364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.787560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.787593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 
00:36:53.526 [2024-07-10 14:39:02.787740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.787772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.787912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.787944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.788153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.788190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.788366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.788399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.788571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.788607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 EAL: No free 2048 kB hugepages reported on node 1 00:36:53.526 [2024-07-10 14:39:02.788807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.788840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.789008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.789041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.789254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.789289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.789515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.789551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.789774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.789806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 
00:36:53.526 [2024-07-10 14:39:02.789976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.790008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.790206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.790242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.790420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.790459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.790650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.790685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.790876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.790913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.791076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.791107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.791274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.791311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.791464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.791502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.791708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.791741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.791939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.791975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 
00:36:53.526 [2024-07-10 14:39:02.792135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.792171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.792346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.792378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.792580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.792613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.792795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.792828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.793033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.793066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.793215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.526 [2024-07-10 14:39:02.793247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.526 qpair failed and we were unable to recover it. 00:36:53.526 [2024-07-10 14:39:02.793448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.527 [2024-07-10 14:39:02.793481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.527 qpair failed and we were unable to recover it. 00:36:53.527 [2024-07-10 14:39:02.793693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.527 [2024-07-10 14:39:02.793725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.527 qpair failed and we were unable to recover it. 00:36:53.527 [2024-07-10 14:39:02.793951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.527 [2024-07-10 14:39:02.793984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.527 qpair failed and we were unable to recover it. 00:36:53.527 [2024-07-10 14:39:02.794162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.527 [2024-07-10 14:39:02.794194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.527 qpair failed and we were unable to recover it. 
00:36:53.527 [2024-07-10 14:39:02.794371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.527 [2024-07-10 14:39:02.794403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:53.527 qpair failed and we were unable to recover it.
00:36:53.527 [... this three-line error sequence repeats continuously from 14:39:02.794 through 14:39:02.839 for tqpair handles 0x6150001f2a00, 0x6150001ffe80, 0x615000210000 and 0x61500021ff00, every attempt targeting addr=10.0.0.2, port=4420 and failing with errno = 111 ...]
00:36:53.532 [2024-07-10 14:39:02.839050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.532 [2024-07-10 14:39:02.839083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:53.532 qpair failed and we were unable to recover it.
00:36:53.532 [2024-07-10 14:39:02.839232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.839265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.839463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.839512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.839733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.839785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.839978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.840013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.840224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.840258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.840463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.840496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.840689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.840736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.840936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.840971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.841124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.841156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.841344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.841378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 
00:36:53.532 [2024-07-10 14:39:02.841545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.841579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.841779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.841827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.842034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.842081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.842243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.842278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.842489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.842523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.842703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.842737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.842918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.842966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.843158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.843192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.843372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.843406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.843589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.843622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 
00:36:53.532 [2024-07-10 14:39:02.843797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.843830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.532 qpair failed and we were unable to recover it. 00:36:53.532 [2024-07-10 14:39:02.844005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.532 [2024-07-10 14:39:02.844038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.844242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.844274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.844525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.844572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.844785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.844834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.845004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.845039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.845188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.845221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.845403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.845444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.845607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.845640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.845848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.845896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 
00:36:53.533 [2024-07-10 14:39:02.846063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.846097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.846300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.846333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.846514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.846548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.846726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.846773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.846983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.847018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.847168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.847201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.847392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.847431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.847593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.847641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.847808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.847845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.848022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.848056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 
00:36:53.533 [2024-07-10 14:39:02.848213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.848247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.848402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.848443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.848640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.848693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.848879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.848914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.849091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.849125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.849275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.849309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.849487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.849521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.849695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.849742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.849966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.850001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.850154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.850188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 
00:36:53.533 [2024-07-10 14:39:02.850342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.850374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.850557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.850590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.850756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.850803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.850960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.851005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.851164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.851198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.851379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.533 [2024-07-10 14:39:02.851420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.533 qpair failed and we were unable to recover it. 00:36:53.533 [2024-07-10 14:39:02.851635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.851667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.851824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.851858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.852107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.852140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.852296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.852330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 
00:36:53.534 [2024-07-10 14:39:02.852524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.852572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.852798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.852832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.852992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.853024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.853205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.853238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.853422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.853461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.853638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.853670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.853815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.853847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.854032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.854064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.854265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.854297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.854499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.854545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 
00:36:53.534 [2024-07-10 14:39:02.854723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.854771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.855035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.855082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.855245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.855281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.855465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.855500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.855662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.855698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.855879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.855920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.856092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.856124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.856305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.856337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.856527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.856575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.856745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.856791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 
00:36:53.534 [2024-07-10 14:39:02.856984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.857019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.857092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:53.534 [2024-07-10 14:39:02.857204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.857237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.857382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.857420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.857584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.857617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.857762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.857794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.857970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.858005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.858208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.858241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.858423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.858466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.858652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.858686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 
00:36:53.534 [2024-07-10 14:39:02.858889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.858922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.859107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.859140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.859311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.859344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.859527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.859560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.859757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.859803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.860016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.860050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.860195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.534 [2024-07-10 14:39:02.860228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.534 qpair failed and we were unable to recover it. 00:36:53.534 [2024-07-10 14:39:02.860386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.860418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.860636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.860683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.860868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.860902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 
00:36:53.535 [2024-07-10 14:39:02.861058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.861091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.861274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.861306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.861493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.861527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.861685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.861718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.861862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.861895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.862077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.862109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.862264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.862297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.862501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.862535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.862685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.862719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.862896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.862929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 
00:36:53.535 [2024-07-10 14:39:02.863115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.863152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.863348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.863397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.863621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.863669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.863904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.863940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.864113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.864147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.864299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.864332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.864533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.864570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.864791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.864839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.865008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.865044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.865225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.865259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 
00:36:53.535 [2024-07-10 14:39:02.865514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.865548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.865752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.865784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.865960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.865993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.866195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.866232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.866406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.866447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.866651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.866684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.866873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.866905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.867081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.867126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.867273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.867306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.867470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.867505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 
00:36:53.535 [2024-07-10 14:39:02.867679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.867728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.867945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.867981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.868156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.868189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.868347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.868381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.868589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.868638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.868819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.868866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.869054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.869088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.535 [2024-07-10 14:39:02.869301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.535 [2024-07-10 14:39:02.869333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.535 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.869507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.869541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.869689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.869734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 
00:36:53.536 [2024-07-10 14:39:02.869923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.869959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.870166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.870201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.870382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.870419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.870594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.870628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.870791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.870826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.871045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.871078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.871284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.871318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.871518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.871566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.871784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.871831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.872022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.872056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 
00:36:53.536 [2024-07-10 14:39:02.872269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.872303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.872484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.872518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.872677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.872715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.872881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.872913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.873081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.873113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.873309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.873341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.873534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.873566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.873740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.873772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.873957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.873990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.874135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.874166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 
00:36:53.536 [2024-07-10 14:39:02.874342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.874374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.874586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.874633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.874854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.874901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.875058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.875098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.875275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.875308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.875499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.875533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.875728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.875776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.875938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.875973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.876126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.876161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.876359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.876393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 
00:36:53.536 [2024-07-10 14:39:02.876577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.876624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.876774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.876815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.877009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.877044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.877226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.877258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.877448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.877486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.877674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.877718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.877864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.877901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.878064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.536 [2024-07-10 14:39:02.878097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.536 qpair failed and we were unable to recover it. 00:36:53.536 [2024-07-10 14:39:02.878237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.878269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.878467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.878528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 
00:36:53.537 [2024-07-10 14:39:02.878713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.878748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.878930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.878964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.879142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.879175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.879350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.879383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.879600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.879637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.879812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.879847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.880002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.880034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.880237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.880269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.880419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.880459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.880643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.880676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 
00:36:53.537 [2024-07-10 14:39:02.880869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.880902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.881084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.881117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.881343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.881391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.881573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.881620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.881836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.881870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.882046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.882079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.882224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.882257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.882442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.882475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.882660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.882693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.882885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.882932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 
00:36:53.537 [2024-07-10 14:39:02.883121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.883155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.883343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.883375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.883537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.883570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.883768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.883821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.884038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.884084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.884287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.884323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.884501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.884537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.884736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.884778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.885051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.885098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.885261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.885297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 
00:36:53.537 [2024-07-10 14:39:02.885474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.885508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.885655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.885688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.885871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.885904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.886083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.886116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.886269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.886302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.886509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.886556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.886748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.886784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.886976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.887010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.537 [2024-07-10 14:39:02.887191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.537 [2024-07-10 14:39:02.887224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.537 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.887367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.887399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 
00:36:53.538 [2024-07-10 14:39:02.887605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.887652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.887845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.887879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.888056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.888093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.888273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.888305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.888483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.888517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.888706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.888753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.888957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.888992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.889172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.889206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.889383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.889432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.889584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.889617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 
00:36:53.538 [2024-07-10 14:39:02.889815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.889862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.890051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.890086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.890264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.890298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.890473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.890507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.890656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.890690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.890912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.890945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.891125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.891158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.891354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.891401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.891639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.891675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.891827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.891860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 
00:36:53.538 [2024-07-10 14:39:02.892032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.892064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.892226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.892273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.892498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.892546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.892740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.892780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.892934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.892969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.893211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.893244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.893421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.893460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.893611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.893646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.893875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.893930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.894091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.894125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 
00:36:53.538 [2024-07-10 14:39:02.894332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.894365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.894532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.894565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.894718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.894750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.538 [2024-07-10 14:39:02.894895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.538 [2024-07-10 14:39:02.894927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.538 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.895126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.895158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.895335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.895367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.895542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.895590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.895787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.895822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.896003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.896036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.896215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.896248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 
00:36:53.539 [2024-07-10 14:39:02.896452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.896484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.896697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.896744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.896962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.896997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.897172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.897205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.897421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.897460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.897638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.897685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.897966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.898001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.898182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.898215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.898391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.898422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.898592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.898624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 
00:36:53.539 [2024-07-10 14:39:02.898782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.898830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.899037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.899069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.899243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.899275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.899437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.899469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.899626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.899659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.899810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.899843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.900024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.900056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.900224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.900258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.900436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.900470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.900651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.900683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 
00:36:53.539 [2024-07-10 14:39:02.900827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.900859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.901019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.901056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.901233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.901266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.901446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.901485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.901637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.901670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.901822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.901854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.902032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.902064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.902243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.902275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.902436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.902469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.902616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.902648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 
00:36:53.539 [2024-07-10 14:39:02.902793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.902859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.903069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.903101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.903275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.903307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.903481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.539 [2024-07-10 14:39:02.903514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.539 qpair failed and we were unable to recover it. 00:36:53.539 [2024-07-10 14:39:02.903712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.903773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.903969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.904003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.904208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.904241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.904422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.904461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.904638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.904685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.904914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.904962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 
00:36:53.540 [2024-07-10 14:39:02.905150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.905185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.905365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.905400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.905596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.905629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.905809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.905856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.906067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.906101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.906255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.906287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.906461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.906494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.906665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.906697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.906873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.906905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.907059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.907092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 
00:36:53.540 [2024-07-10 14:39:02.907245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.907277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.907442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.907489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.907676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.907726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.907919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.907953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.908126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.908158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.908372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.908405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.908600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.908633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.908808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.908841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.909036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.909084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.909297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.909332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 
00:36:53.540 [2024-07-10 14:39:02.909522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.909557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.909740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.909774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.909980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.910012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.910190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.910228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.910407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.910449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.910621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.910667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.910863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.910910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.911088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.911122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.911333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.911366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.911529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.911562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 
00:36:53.540 [2024-07-10 14:39:02.911772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.911820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.911982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.912017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.912172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.912206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.912377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.912410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.540 [2024-07-10 14:39:02.912600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.540 [2024-07-10 14:39:02.912636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.540 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.912793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.912826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.913002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.913035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.913222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.913255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.913435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.913468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.913623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.913656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 
00:36:53.541 [2024-07-10 14:39:02.913874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.913907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.914069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.914116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.914281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.914315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.914561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.914617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.914781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.914827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.914984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.915018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.915198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.915232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.915391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.915433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.915640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.915687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.915878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.915913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 
00:36:53.541 [2024-07-10 14:39:02.916137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.916171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.916353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.916385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.916601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.916634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.916816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.916848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.917052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.917085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.917261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.917293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.917462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.917510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.917686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.917734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.917917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.917952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.918127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.918160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 
00:36:53.541 [2024-07-10 14:39:02.918335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.918368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.918525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.918559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.918760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.918794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.918977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.919016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.919195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.919227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.919413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.919457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.919638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.919672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.919824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.919873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.920091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.920124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.920276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.920309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 
00:36:53.541 [2024-07-10 14:39:02.920477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.920523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.920743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.920790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.920953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.920987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.921160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.921193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.921366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.541 [2024-07-10 14:39:02.921398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.541 qpair failed and we were unable to recover it. 00:36:53.541 [2024-07-10 14:39:02.921575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.921622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.921821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.921857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.922050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.922085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.922270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.922304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.922508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.922542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 
00:36:53.542 [2024-07-10 14:39:02.922689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.922722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.922923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.922956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.923138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.923172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.923313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.923345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.923516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.923564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.923747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.923782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.923989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.924036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.924225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.924259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.924405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.924446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.924598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.924630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 
00:36:53.542 [2024-07-10 14:39:02.924812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.924847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.925026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.925058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.925242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.925275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.925419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.925458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.925623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.925670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.925831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.925867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.926012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.926045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.926250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.926282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.926462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.926496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.926668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.926701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 
00:36:53.542 [2024-07-10 14:39:02.926848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.926881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.927076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.927109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.927307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.927354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.927565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.927618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.927848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.927895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.928085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.928120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.928270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.928304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.928480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.928514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.928658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.928691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 00:36:53.542 [2024-07-10 14:39:02.928844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.542 [2024-07-10 14:39:02.928877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.542 qpair failed and we were unable to recover it. 
00:36:53.542 [2024-07-10 14:39:02.929032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.929065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.929243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.929276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.929449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.929481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.929679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.929736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.929926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.929961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.930121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.930155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.930334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.930367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.930586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.930620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.930837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.930871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.931047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.931081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 
00:36:53.543 [2024-07-10 14:39:02.931287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.931322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.931494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.931528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.931690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.931732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.931876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.931908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.932106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.932139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.932317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.932351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.932519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.932553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.932720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.932766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.932948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.932982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.933192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.933226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 
00:36:53.543 [2024-07-10 14:39:02.933409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.933451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.933624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.933671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.933833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.933868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.934019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.934053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.934260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.934293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.934471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.934506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.934746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.934792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.935009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.935043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.935259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.935292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.935473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.935506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 
00:36:53.543 [2024-07-10 14:39:02.935689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.935731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.935961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.936021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.936190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.936225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.936398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.936445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.936635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.936670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.936854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.936888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.937067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.937101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.937284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.937318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.937492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.937525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 00:36:53.543 [2024-07-10 14:39:02.937724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.543 [2024-07-10 14:39:02.937773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.543 qpair failed and we were unable to recover it. 
00:36:53.543 [2024-07-10 14:39:02.937956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.937992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.938175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.938209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.938387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.938422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.938613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.938647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.938836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.938882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.939093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.939128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.939312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.939346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.939539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.939572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.939732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.939766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.939967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.940000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 
00:36:53.544 [2024-07-10 14:39:02.940176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.940209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.940390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.940430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.940611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.940644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.940794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.940833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.941014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.941046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.941198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.941231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.941395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.941443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.941622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.941654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.941806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.941839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.941980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.942012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 
00:36:53.544 [2024-07-10 14:39:02.942178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.942227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.942435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.942473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.942644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.942691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.942873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.942907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.943112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.943157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.943308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.943340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.943502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.943535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.943713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.943745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.943906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.943941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.944145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.944180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 
00:36:53.544 [2024-07-10 14:39:02.944339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.944372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.944528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.944561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.944709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.944753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.944933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.944971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.945153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.945186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.945349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.945383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.945562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.945610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.945808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.945843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.946049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.946083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 00:36:53.544 [2024-07-10 14:39:02.946276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.544 [2024-07-10 14:39:02.946310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.544 qpair failed and we were unable to recover it. 
00:36:53.544 [2024-07-10 14:39:02.946529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.946564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.946726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.946759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.946982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.947016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.947163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.947197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.947382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.947431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.947615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.947649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.947842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.947877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.948918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.948966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.949180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.949215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.949371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.949406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 
00:36:53.545 [2024-07-10 14:39:02.950141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.950175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.950472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.950506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.950669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.950703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.950892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.950924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.951109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.951142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.951324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.951357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.951523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.951556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.951736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.951769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.951958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.951991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.952161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.952210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 
00:36:53.545 [2024-07-10 14:39:02.952397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.952437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.952618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.952665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.952838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.952872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.953023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.953056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.953209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.953241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.953399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.953445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.953618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.953651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.953879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.953913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.954119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.954152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.954305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.954337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 
00:36:53.545 [2024-07-10 14:39:02.954494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.954527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.954676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.954708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.954866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.954912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.955057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.955099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.955283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.955315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.955506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.955539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.955698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.955733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.955911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.955943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.956094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.956126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.956294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.956333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 
00:36:53.545 [2024-07-10 14:39:02.956501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.956535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.956685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.956721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.545 [2024-07-10 14:39:02.956901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.545 [2024-07-10 14:39:02.956936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.545 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.957080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.957114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.957288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.957321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.957499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.957532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.957700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.957748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.957978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.958014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.958209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.958255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.958467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.958501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 
00:36:53.546 [2024-07-10 14:39:02.958658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.958691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.958887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.958920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.959097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.959131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.959308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.959341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.959521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.959558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.959785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.959832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.960020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.960065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.960281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.960314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.960494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.960528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.960708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.960750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 
00:36:53.546 [2024-07-10 14:39:02.960897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.960934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.961111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.961143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.961345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.961376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.961575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.961608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.961767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.961802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.961960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.961994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.962153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.962186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.962363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.962396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.962558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.962592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.962738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.962771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 
00:36:53.546 [2024-07-10 14:39:02.962939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.962973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.546 [2024-07-10 14:39:02.963119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.546 [2024-07-10 14:39:02.963152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.546 qpair failed and we were unable to recover it. 00:36:53.547 [2024-07-10 14:39:02.963300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.963333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 00:36:53.547 [2024-07-10 14:39:02.963495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.963529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 00:36:53.547 [2024-07-10 14:39:02.963708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.963750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 00:36:53.547 [2024-07-10 14:39:02.963923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.963956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 00:36:53.547 [2024-07-10 14:39:02.964145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.964177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 00:36:53.547 [2024-07-10 14:39:02.964348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.964380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 00:36:53.547 [2024-07-10 14:39:02.964534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.964567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 00:36:53.547 [2024-07-10 14:39:02.964770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.964809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 
00:36:53.547 [2024-07-10 14:39:02.964965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.965004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 00:36:53.547 [2024-07-10 14:39:02.965177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.965210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 00:36:53.547 [2024-07-10 14:39:02.965359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.965391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 00:36:53.547 [2024-07-10 14:39:02.965557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.965591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 00:36:53.547 [2024-07-10 14:39:02.965762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.965818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 00:36:53.547 [2024-07-10 14:39:02.966032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.966067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 00:36:53.547 [2024-07-10 14:39:02.966237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.547 [2024-07-10 14:39:02.966271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.547 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.966464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.966499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.966671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.966704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.966919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.966952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 
00:36:53.548 [2024-07-10 14:39:02.967105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.967138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.967319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.967367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.967581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.967629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.967846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.967892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.968065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.968099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.968299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.968332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.968530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.968563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.968751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.968784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.968965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.968997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.969179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.969213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 
00:36:53.548 [2024-07-10 14:39:02.969434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.969472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.969660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.969693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.969918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.969951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.970119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.970162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.970338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.970370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.970546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.970578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.548 [2024-07-10 14:39:02.970733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.548 [2024-07-10 14:39:02.970766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.548 qpair failed and we were unable to recover it. 00:36:53.825 [2024-07-10 14:39:02.971800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.825 [2024-07-10 14:39:02.971854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.825 qpair failed and we were unable to recover it. 00:36:53.825 [2024-07-10 14:39:02.972055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.972089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.972258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.972291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 
00:36:53.826 [2024-07-10 14:39:02.972496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.972529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.972710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.972754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.972909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.972950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.973139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.973172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.973357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.973390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.973569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.973603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.973778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.973810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.973983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.974016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.974201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.974234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.974385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.974432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 
00:36:53.826 [2024-07-10 14:39:02.974592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.974626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.974862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.974919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.975134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.975169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.975359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.975392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.975611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.975645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.975862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.975894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.976039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.976071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.976236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.976269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.976455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.976489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.976664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.976697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 
00:36:53.826 [2024-07-10 14:39:02.976885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.976926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.977100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.977133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.977356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.977388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.977554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.977586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.977783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.977832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.977988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.978023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.978178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.978213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.978412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.978461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.978632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.978665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.978847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.978880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 
00:36:53.826 [2024-07-10 14:39:02.979054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.979092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.979249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.979281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.979487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.979520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.979721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.979760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.979952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.979987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.980148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.980181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.980365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.980398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.980605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.980654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.980853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.980907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.981105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.981141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 
00:36:53.826 [2024-07-10 14:39:02.981322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.981355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.981526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.981561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.981770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.981804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.981958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.981990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.826 [2024-07-10 14:39:02.982201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.826 [2024-07-10 14:39:02.982235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.826 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.982385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.982438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.982599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.982632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.982792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.982827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.983041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.983074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.983221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.983256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 
00:36:53.827 [2024-07-10 14:39:02.983438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.983472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.983750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.983788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.984083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.984116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.984293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.984326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.984504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.984538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.984738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.984785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.984979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.985029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.985215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.985249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.985446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.985481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.985638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.985670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 
00:36:53.827 [2024-07-10 14:39:02.985851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.985884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.986062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.986094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.986270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.986302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.986498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.986531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.986708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.986760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.986956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.986992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.987204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.987238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.987507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.987553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.987716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.987750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.987925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.987982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 
00:36:53.827 [2024-07-10 14:39:02.988150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.988192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.988381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.988432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.988596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.988630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.988779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.988827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.989030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.989063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.989279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.989321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.989490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.989535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.989683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.989724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.989872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.989905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.990062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.990108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 
00:36:53.827 [2024-07-10 14:39:02.990256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.990289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.990498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.990545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.990701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.990737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.990932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.990966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.991131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.827 [2024-07-10 14:39:02.991165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.827 qpair failed and we were unable to recover it. 00:36:53.827 [2024-07-10 14:39:02.991440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.991474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.991660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.991693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.991886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.991919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.992101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.992133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.992342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.992376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 
00:36:53.828 [2024-07-10 14:39:02.992576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.992610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.992798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.992831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.992983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.993016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.993192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.993225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.993387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.993436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.993623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.993673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.993938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.993984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.994282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.994316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.994503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.994536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.994686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.994720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 
00:36:53.828 [2024-07-10 14:39:02.994910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.994946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.995099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.995148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.995370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.995404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.995579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.995612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.995766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.995807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.995990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.996024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.996205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.996239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.996404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.996454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.996604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.996637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.996840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.996873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 
00:36:53.828 [2024-07-10 14:39:02.997021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.997058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.997219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.997252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.997401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.997450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.997635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.997668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.997873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.997906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.998057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.998090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.998270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.998303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.998539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.998573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.998745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.998800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.999042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.999074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 
00:36:53.828 [2024-07-10 14:39:02.999280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.999313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.828 qpair failed and we were unable to recover it. 00:36:53.828 [2024-07-10 14:39:02.999510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.828 [2024-07-10 14:39:02.999544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:02.999703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:02.999740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:02.999901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:02.999935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.000098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.000132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.000324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.000360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.000518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.000553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.000731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.000765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.000937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.000971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.001118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.001152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 
00:36:53.829 [2024-07-10 14:39:03.001331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.001364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.001548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.001583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.001787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.001820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.002022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.002056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.002197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.002231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.002438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.002487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.002694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.002729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.002961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.002994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.003224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.003257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.003452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.003487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 
00:36:53.829 [2024-07-10 14:39:03.003642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.003677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.003863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.003896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.004076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.004109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.004282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.004315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.004500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.004534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.004710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.004742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.004921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.004956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.005170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.005204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.005383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.005438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.005601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.005637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 
00:36:53.829 [2024-07-10 14:39:03.005797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.005834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.006083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.006115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.006308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.006341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.006519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.006553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.006733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.006767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.006941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.006975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.007154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.007202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.007385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.007418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.007581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.007614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.007791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.007823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 
00:36:53.829 [2024-07-10 14:39:03.008035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.008069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.008212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.008245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.008423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.008463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.008644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.008677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.008866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.008899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.009045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.009078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.009239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.009275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.009456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.009490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.009673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.009706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.009903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.009936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 
00:36:53.829 [2024-07-10 14:39:03.010138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.010171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.010326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.010359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.010544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.010579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.010771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.010803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.010995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.011030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.011998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.012033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.012261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.012307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.012505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.012543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.012694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.012752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 00:36:53.829 [2024-07-10 14:39:03.012919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.829 [2024-07-10 14:39:03.012951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.829 qpair failed and we were unable to recover it. 
00:36:53.830 [2024-07-10 14:39:03.013144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.013178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.013370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.013403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.013607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.013641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.013829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.013863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.014090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.014123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.014286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.014318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.014500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.014533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.014693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.014734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.014938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.014987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.015172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.015206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 
00:36:53.830 [2024-07-10 14:39:03.015383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.015434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.015619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.015652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.015864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.015896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.016045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.016084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.016271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.016306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.016506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.016540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.016717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.016757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.016951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.016984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.017188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.017221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.017404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.017464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 
00:36:53.830 [2024-07-10 14:39:03.017617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.017652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.017839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.017871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.018079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.018115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.018292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.018324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.018505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.018539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.018730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.018764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.018955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.018987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.019138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.019170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.019348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.019381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.019571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.019603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 
00:36:53.830 [2024-07-10 14:39:03.019797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.019830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.019972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.020004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.020185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.020217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.020394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.020443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.020645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.020694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.020935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.020982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.021199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.021234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.021395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.021443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.021598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.021631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.021809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.021841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 
00:36:53.830 [2024-07-10 14:39:03.022054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.022093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.022277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.022309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.022500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.022533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.022678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.022711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.022893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.022926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.023134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.023168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.023348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.023380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.023580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.023628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.023811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.023846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.024048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.024095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 
00:36:53.830 [2024-07-10 14:39:03.024283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.024323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.024513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.024548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.024765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.024799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.025016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.025060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.025203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.025236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.025385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.025449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.025645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.025678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.025861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.025903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.026090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.026124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.026310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.026345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 
00:36:53.830 [2024-07-10 14:39:03.026555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.026589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.026768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.026801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.026991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.027025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.027194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.027226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.027436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.027470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.027673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.027706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.027901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.027934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.028124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.028156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.028313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.028347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.028555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.028588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 
00:36:53.830 [2024-07-10 14:39:03.028764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.028797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.028984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.029016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.029161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.029193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.029404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.029453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.029626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.029674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.029853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.029890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.830 [2024-07-10 14:39:03.030068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.830 [2024-07-10 14:39:03.030102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.830 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.030261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.030295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.030477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.030511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.030675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.030709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 
00:36:53.831 [2024-07-10 14:39:03.030864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.030905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.031090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.031123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.031278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.031319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.031503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.031538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.031696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.031739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.031951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.031983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.032171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.032205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.032391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.032432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.032614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.032648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.032830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.032862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 
00:36:53.831 [2024-07-10 14:39:03.033017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.033055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.033257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.033289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.033441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.033486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.033690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.033749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.033939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.033974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.034157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.034191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.034369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.034401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.034609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.034660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.034870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.034916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.035109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.035146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 
00:36:53.831 [2024-07-10 14:39:03.035323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.035357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.035529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.035563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.035721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.035754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.035923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.035958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.036117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.036150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.036333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.036368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.036530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.036564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.036758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.036806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.036994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.037029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.037178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.037211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 
00:36:53.831 [2024-07-10 14:39:03.037358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.037391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.037563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.037598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.037749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.037791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.037946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.037980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.038129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.038162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.038356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.038403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.038606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.038641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.038821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.038868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.039031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.039068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.039222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.039258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 
00:36:53.831 [2024-07-10 14:39:03.039443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.039477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.039634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.039667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.039878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.039927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.040148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.040181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.040362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.040395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.040574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.040607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.040824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.040857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.041008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.041044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.041296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.041330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.041526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.041559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 
00:36:53.831 [2024-07-10 14:39:03.041737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.041775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.041956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.041990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.042165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.042199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.042375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.042408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.042591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.042624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.042773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.042811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.042986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.043019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.043194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.043227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.043417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.043458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.043605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.043638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 
00:36:53.831 [2024-07-10 14:39:03.043832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.043864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.044056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.044089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.044284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.044317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.044468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.044513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.044694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.044746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.044944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.044979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.045201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.045236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.045444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.045478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.045654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.045687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 00:36:53.831 [2024-07-10 14:39:03.045845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.831 [2024-07-10 14:39:03.045877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.831 qpair failed and we were unable to recover it. 
00:36:53.832 [2024-07-10 14:39:03.046055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.046088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.046237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.046270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.046430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.046463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.046640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.046673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.046849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.046882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.047062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.047094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.047277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.047313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.047480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.047515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.047670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.047704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.047889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.047924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 
00:36:53.832 [2024-07-10 14:39:03.048107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.048140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.048292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.048327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.048484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.048519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.048696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.048734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.048924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.048957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.049104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.049138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.049321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.049354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.049515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.049548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.049698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.049731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.049870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.049902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 
00:36:53.832 [2024-07-10 14:39:03.050049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.050086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.050242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.050277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.050439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.050473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.050654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.050688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.050922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.050954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.051109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.051143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.051296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.051329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.051528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.051575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.051781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.051828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.052025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.052061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 
00:36:53.832 [2024-07-10 14:39:03.052224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.052258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.052410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.052450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.052601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.052633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.052788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.052823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.053015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.053048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.053226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.053259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.053458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.053492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.053647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.053680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.053882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.053914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.054104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.054136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 
00:36:53.832 [2024-07-10 14:39:03.054320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.054354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.054522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.054557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.054712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.054769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.055032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.055077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.055226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.055259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.055404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.055454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.055636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.055670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.055938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.055986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.056175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.056210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.056366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.056400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 
00:36:53.832 [2024-07-10 14:39:03.056614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.056650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.056818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.056854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.057009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.057044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.057248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.057281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.057442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.057476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.057654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.057702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.057902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.057959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.058176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.058209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.058399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.058444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.058629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.058662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 
00:36:53.832 [2024-07-10 14:39:03.059256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.059293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.059488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.059522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.059699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.059742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.059909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.059942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.060137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.060185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.060368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.060403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.060609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.060656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.060837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.060872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.061023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.061056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.832 qpair failed and we were unable to recover it. 00:36:53.832 [2024-07-10 14:39:03.061212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.832 [2024-07-10 14:39:03.061248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 
00:36:53.833 [2024-07-10 14:39:03.061421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.061464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.061642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.061675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.061897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.061930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.062133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.062166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.062322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.062357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.062549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.062583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.062751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.062809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.062997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.063044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.063235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.063272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.063442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.063476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 
00:36:53.833 [2024-07-10 14:39:03.063624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.063657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.063814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.063846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.064022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.064054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.064233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.064266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.064452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.064485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.064662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.064694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.064892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.064925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.065134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.065167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.065363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.065395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.065552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.065585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 
00:36:53.833 [2024-07-10 14:39:03.065753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.065800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.065957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.065993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.066198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.066231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.066411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.066451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.066601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.066634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.066839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.066871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.067051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.067084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.067285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.067318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.067507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.067540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.067719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.067766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 
00:36:53.833 [2024-07-10 14:39:03.067990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.068031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.068216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.068249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.068438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.068472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.068621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.068655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.068837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.068870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.069077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.069109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.069287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.069319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.069500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.069533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.069695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.069749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.069959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.070007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 
00:36:53.833 [2024-07-10 14:39:03.070202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.070239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.070453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.070488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.070642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.070675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.070819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.070852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.071068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.071102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.071284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.071317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.071500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.071534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.071711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.071744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.071904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.071947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.072206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.072239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 
00:36:53.833 [2024-07-10 14:39:03.072421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.072462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.072645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.072679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.072830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.072863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.073012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.073045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.073298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.073337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.073497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.073532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.073724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.073772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.073938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.073977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.074153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.074186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.074331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.074376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 
00:36:53.833 [2024-07-10 14:39:03.074566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.074599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.074757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.074792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.074939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.074972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.075161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.075193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.075343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.075376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.075548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.075581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.075731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.075763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.075919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.075952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.076123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.076155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.076313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.076346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 
00:36:53.833 [2024-07-10 14:39:03.076546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.076580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.833 [2024-07-10 14:39:03.076759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.833 [2024-07-10 14:39:03.076806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.833 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.076999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.077041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.077220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.077255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.077444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.077478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.077623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.077657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.077867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.077901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.078076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.078109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.078263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.078296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.078456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.078491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 
00:36:53.834 [2024-07-10 14:39:03.078664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.078711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.078873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.078907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.079111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.079144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.079292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.079326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.079508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.079557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.079741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.079790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.079984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.080020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.080225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.080258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.080443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.080477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.080625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.080659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 
00:36:53.834 [2024-07-10 14:39:03.080825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.080861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.081044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.081078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.081261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.081295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.081455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.081488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.081629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.081662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.081832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.081880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.082092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.082127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.082281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.082320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.082531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.082565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.082738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.082770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 
00:36:53.834 [2024-07-10 14:39:03.082923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.082955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.083129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.083162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.083335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.083381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.083554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.083590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.083743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.083777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.083952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.083985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.084138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.084171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.084319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.084352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.084536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.084582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.084756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.084803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 
00:36:53.834 [2024-07-10 14:39:03.084994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.085029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.085246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.085280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.085460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.085496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.085659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.085696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.085885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.085919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.086102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.086135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.086309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.086341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.086507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.086541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.086732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.086764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.086911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.086943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 
00:36:53.834 [2024-07-10 14:39:03.087145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.087178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.087355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.087388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.087558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.087593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.087814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.087862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.088069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.088106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.088302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.088337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.088501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.088535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.088701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.088735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.088894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.088940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.089205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.089238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 
00:36:53.834 [2024-07-10 14:39:03.089432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.089466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.089629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.089665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.089875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.089923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.090080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.090116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.090288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.090331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.090520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.090554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.090705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.090738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.090890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.090928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.091106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.091138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.091288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.091320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 
00:36:53.834 [2024-07-10 14:39:03.091515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.091548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.834 [2024-07-10 14:39:03.091748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.834 [2024-07-10 14:39:03.091795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.834 qpair failed and we were unable to recover it. 00:36:53.835 [2024-07-10 14:39:03.091978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.835 [2024-07-10 14:39:03.092013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.835 qpair failed and we were unable to recover it. 00:36:53.835 [2024-07-10 14:39:03.092191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.835 [2024-07-10 14:39:03.092224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.835 qpair failed and we were unable to recover it. 00:36:53.835 [2024-07-10 14:39:03.092373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.835 [2024-07-10 14:39:03.092406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.835 qpair failed and we were unable to recover it. 00:36:53.835 [2024-07-10 14:39:03.092611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.835 [2024-07-10 14:39:03.092658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.835 qpair failed and we were unable to recover it. 00:36:53.835 [2024-07-10 14:39:03.092843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.835 [2024-07-10 14:39:03.092877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.835 qpair failed and we were unable to recover it. 00:36:53.835 [2024-07-10 14:39:03.093085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.835 [2024-07-10 14:39:03.093118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.835 qpair failed and we were unable to recover it. 00:36:53.835 [2024-07-10 14:39:03.093289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.835 [2024-07-10 14:39:03.093323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.835 qpair failed and we were unable to recover it. 00:36:53.835 [2024-07-10 14:39:03.093497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.835 [2024-07-10 14:39:03.093542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.835 qpair failed and we were unable to recover it. 
00:36:53.835 [2024-07-10 14:39:03.093725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.835 [2024-07-10 14:39:03.093759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:53.835 qpair failed and we were unable to recover it.
00:36:53.835 [... the same three-entry sequence (connect() failed with errno = 111, i.e. connection refused, followed by the nvme_tcp_qpair_connect_sock error and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 14:39:03.093 through 14:39:03.110, cycling over tqpair=0x6150001f2a00, 0x61500021ff00, 0x615000210000 and 0x6150001ffe80, all targeting addr=10.0.0.2, port=4420; the near-identical entries are condensed here ...]
00:36:53.836 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence continues between 14:39:03.111 and 14:39:03.114 for tqpair=0x61500021ff00, 0x615000210000 and 0x6150001f2a00 at addr=10.0.0.2, port=4420, interleaved with the application startup notices below; condensed ...]
00:36:53.836 [2024-07-10 14:39:03.112942] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:53.836 [2024-07-10 14:39:03.112989] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:53.836 [2024-07-10 14:39:03.113017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only SPDK application currently running.
00:36:53.836 [2024-07-10 14:39:03.113057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:53.836 [2024-07-10 14:39:03.113326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:36:53.836 [2024-07-10 14:39:03.113382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:36:53.836 [2024-07-10 14:39:03.113463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:36:53.836 [2024-07-10 14:39:03.113474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
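The app_setup_trace notices above describe how the trace buffer of this run can be captured. A minimal sketch, only restating what the notices themselves suggest and assuming the shm file /dev/shm/nvmf_trace.0 named there still exists while the nvmf target is up:

    spdk_trace -s nvmf -i 0          # snapshot the live trace ring for the app/shm id named in the notice
    spdk_trace                       # equivalent shortcut if this is the only SPDK application running
    cp /dev/shm/nvmf_trace.0 /tmp/   # keep the shm trace file for offline analysis/debug

The exact spdk_trace invocation depends on the SPDK build in use; the commands above follow the notice text rather than documenting additional options.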
00:36:53.836 [2024-07-10 14:39:03.114509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.836 [2024-07-10 14:39:03.114556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:53.836 qpair failed and we were unable to recover it.
00:36:53.836 [... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." sequence keeps repeating from 14:39:03.114 through 14:39:03.137, still cycling over tqpair=0x6150001f2a00, 0x61500021ff00, 0x615000210000 and 0x6150001ffe80 at addr=10.0.0.2, port=4420; later records carry the 00:36:53.837 console timestamp; the near-identical entries are condensed here ...]
00:36:53.837 [2024-07-10 14:39:03.137650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.837 [2024-07-10 14:39:03.137683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.837 qpair failed and we were unable to recover it. 00:36:53.837 [2024-07-10 14:39:03.137837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.837 [2024-07-10 14:39:03.137869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.837 qpair failed and we were unable to recover it. 00:36:53.837 [2024-07-10 14:39:03.138049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.837 [2024-07-10 14:39:03.138083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.837 qpair failed and we were unable to recover it. 00:36:53.837 [2024-07-10 14:39:03.138234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.138268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.138450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.138497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.138673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.138707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.138874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.138907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.139061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.139095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.139268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.139301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.139471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.139519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 
00:36:53.838 [2024-07-10 14:39:03.139708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.139751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.139931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.139964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.140157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.140191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.140339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.140371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.140521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.140553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.140704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.140736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.140881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.140913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.141070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.141102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.141258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.141298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.141461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.141495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 
00:36:53.838 [2024-07-10 14:39:03.141665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.141713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.141868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.141902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.142073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.142108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.142287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.142321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.142483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.142517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.142667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.142699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.142874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.142907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.143045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.143077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.143234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.143266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.143457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.143491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 
00:36:53.838 [2024-07-10 14:39:03.143689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.143742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.143921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.143955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.144134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.144181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.144392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.144434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.144603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.144650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.144855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.144892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.145046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.145082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.145247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.145282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.145442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.145479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.145655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.145703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 
00:36:53.838 [2024-07-10 14:39:03.145899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.145946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.146109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.146159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.146343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.146375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.146539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.146573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.146721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.146753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.146934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.146967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.147141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.147173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.147374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.147420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.147632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.147668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.147826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.147872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 
00:36:53.838 [2024-07-10 14:39:03.148029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.148065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.148216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.148249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.148422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.148462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.148658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.148692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.148890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.148922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.149068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.149101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.149308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.149340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.149498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.149531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.149704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.149741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.149920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.149952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 
00:36:53.838 [2024-07-10 14:39:03.150110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.150142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.150329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.150362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.150569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.150617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.150777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.150813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.150987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.151020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.151175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.151209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.151382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.151436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.151618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.151665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.151848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.151882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.152035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.152068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 
00:36:53.838 [2024-07-10 14:39:03.152250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.152282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.152473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.152506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.152668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.152703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.152907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.152940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.153091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.153124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.153305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.153338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.153519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.153553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.153700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.153733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.153878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.153911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.838 qpair failed and we were unable to recover it. 00:36:53.838 [2024-07-10 14:39:03.154078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.838 [2024-07-10 14:39:03.154111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 
00:36:53.839 [2024-07-10 14:39:03.154257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.154290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.154481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.154528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.154706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.154753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.154926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.154972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.155159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.155193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.155352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.155385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.155551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.155584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.155762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.155797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.155944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.155978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.156131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.156168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 
00:36:53.839 [2024-07-10 14:39:03.156361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.156396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.156565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.156610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.156767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.156800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.156966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.157000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.157145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.157177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.157330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.157363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.157565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.157613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.157791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.157826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.157971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.158010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.158168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.158200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 
00:36:53.839 [2024-07-10 14:39:03.158362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.158408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.158619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.158672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.158842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.158878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.159025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.159060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.159214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.159249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.159431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.159465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.159636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.159683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.159891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.159926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.160077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.160110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.160267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.160299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 
00:36:53.839 [2024-07-10 14:39:03.160498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.160545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.160729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.160765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.160927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.160960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.161114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.161147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.161296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.161329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.161504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.161538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.161697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.161730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.161882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.161915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.162107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.162141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.162307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.162342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 
00:36:53.839 [2024-07-10 14:39:03.162522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.162554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.162701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.162733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.162873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.162905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.163089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.163121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.163270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.163302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.163468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.163503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.163697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.163744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.163914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.163949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.164094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.164127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.164309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.164341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 
00:36:53.839 [2024-07-10 14:39:03.164554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.164587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.164741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.164773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.164925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.164957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.165119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.165151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.165305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.165339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.165546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.165579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.165738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.165770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.165915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.165947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.166095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.166131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.166329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.166376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 
00:36:53.839 [2024-07-10 14:39:03.166554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.166590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.166753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.166785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.166956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.166989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.167161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.167193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.167345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.167378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.167545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.167593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.167773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.167820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.167974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.168010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.168172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.168207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 00:36:53.839 [2024-07-10 14:39:03.168362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.839 [2024-07-10 14:39:03.168396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.839 qpair failed and we were unable to recover it. 
00:36:53.840 [2024-07-10 14:39:03.168588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.168621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.168789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.168823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.168995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.169029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.169198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.169231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.169407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.169451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.169617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.169651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.169815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.169862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.170024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.170060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.170212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.170246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.170430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.170464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 
00:36:53.840 [2024-07-10 14:39:03.170624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.170659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.170824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.170857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.171033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.171068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.171250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.171283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.171444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.171477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.171637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.171671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.171826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.171859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.172010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.172042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.172188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.172220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.172369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.172402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 
00:36:53.840 [2024-07-10 14:39:03.172580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.172613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.172793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.172827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.172983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.173017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.173189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.173222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.173406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.173451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.173604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.173638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.173784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.173817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.173963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.173996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.174162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.174214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.174402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.174444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 
00:36:53.840 [2024-07-10 14:39:03.174611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.174644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.174820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.174853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.174996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.175027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.175214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.175250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.175434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.175473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.175654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.175692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.175863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.175896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.176074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.176107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.176280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.176314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.176477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.176512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 
00:36:53.840 [2024-07-10 14:39:03.176695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.176728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.176884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.176917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.177105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.177138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.177321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.177356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.177541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.177575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.177722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.177756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.177909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.177943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.178124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.178158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.178381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.178436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.178624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.178672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 
00:36:53.840 [2024-07-10 14:39:03.178864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.178901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.179101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.179135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.179342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.179376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.179531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.179565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.179745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.179780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.179954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.180001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.180186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.180220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.180370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.180403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.180589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.180624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.180778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.180811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 
00:36:53.840 [2024-07-10 14:39:03.180982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.181015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.181181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.181215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.181370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.181405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.181606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.181654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.181830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.181866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.182044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.182089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.182242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.182275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.182453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.182488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.182655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.182693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.182850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.182886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 
00:36:53.840 [2024-07-10 14:39:03.183042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.183076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.183223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.183256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.183423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.183466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.183611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.183643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.183790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.183822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.184029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.184062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.184209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.184242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.184416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.840 [2024-07-10 14:39:03.184470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.840 qpair failed and we were unable to recover it. 00:36:53.840 [2024-07-10 14:39:03.184629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.184664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.184817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.184849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 
00:36:53.841 [2024-07-10 14:39:03.185041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.185075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.185254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.185287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.185471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.185505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.185652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.185684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.185859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.185891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.186053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.186087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.186246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.186278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.186457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.186505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.186691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.186727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.186875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.186908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 
00:36:53.841 [2024-07-10 14:39:03.187145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.187178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.187384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.187417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.187581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.187614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.187761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.187794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.187974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.188007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.188168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.188201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.188376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.188420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.188604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.188652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.188881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.188917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.189078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.189112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 
00:36:53.841 [2024-07-10 14:39:03.189265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.189298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.189460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.189495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.189654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.189687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.189891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.189924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.190091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.190124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.190342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.190375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.190544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.190580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.190735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.190768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.190954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.190995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.191153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.191197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 
00:36:53.841 [2024-07-10 14:39:03.191415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.191459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.191624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.191658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.191806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.191839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.192016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.192049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.192224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.192259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.192414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.192458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.192632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.192681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.192902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.192951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.193116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.193151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.193308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.193341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 
00:36:53.841 [2024-07-10 14:39:03.193489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.193522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.193680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.193713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.193867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.193900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.194073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.194105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.194248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.194280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.194440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.194473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.194614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.194646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.194794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.194828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.194970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.195002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.195145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.195178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 
00:36:53.841 [2024-07-10 14:39:03.195330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.195362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.195526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.195562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.195743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.195777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.195955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.196004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.196165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.196198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.196358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.196391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.196570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.196603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.196769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.196807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.196988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.197038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.197183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.197216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 
00:36:53.841 [2024-07-10 14:39:03.197386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.197419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.197584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.197616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.197790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.197838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.198093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.198128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.198299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.198348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.198514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.198547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.198721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.198769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.198942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.198976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.199135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.199176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 00:36:53.841 [2024-07-10 14:39:03.199370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.841 [2024-07-10 14:39:03.199404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.841 qpair failed and we were unable to recover it. 
00:36:53.841 [2024-07-10 14:39:03.199575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.199610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.199794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.199827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.199975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.200007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.200190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.200223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.200401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.200440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.200594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.200627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.200871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.200904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.201084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.201117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.201277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.201310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.201499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.201537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 
00:36:53.842 [2024-07-10 14:39:03.201696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.201731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.201930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.201968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.202158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.202192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.202359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.202394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.202549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.202594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.202792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.202826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.202986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.203021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.203211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.203244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.203450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.203483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.203676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.203724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 
00:36:53.842 [2024-07-10 14:39:03.203901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.203950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.204145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.204181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.204341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.204374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.204579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.204613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.204788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.204822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.205005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.205039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.205213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.205246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.205397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.205439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.205616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.205664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.205823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.205858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 
00:36:53.842 [2024-07-10 14:39:03.206050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.206084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.206243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.206276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.206433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.206467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.206639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.206687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.206854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.206889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.207072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.207106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.207269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.207302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.207461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.207495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.207647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.207685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.207865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.207899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 
00:36:53.842 [2024-07-10 14:39:03.208149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.208182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.208374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.208418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.208590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.208624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.208812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.208859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.209052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.209087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.209274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.209310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.209469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.209503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.209667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.209699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.209872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.209905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.210057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.210090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 
00:36:53.842 [2024-07-10 14:39:03.210276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.210308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.210460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.210493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.210651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.210686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.210862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.210911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.211155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.211189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.211369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.211401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.211603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.211636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.211802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.211849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.212085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.212120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.212297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.212330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 
00:36:53.842 [2024-07-10 14:39:03.212514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.212548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.212736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.212768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.212989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.213021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.213233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.213267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.213434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.213479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.213681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.213729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.213892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.213928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.214126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.214174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.214351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.214388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.214563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.214620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 
00:36:53.842 [2024-07-10 14:39:03.214783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.214817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.214979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.215012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.215160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.215193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.215352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.215387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.842 [2024-07-10 14:39:03.215558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.842 [2024-07-10 14:39:03.215605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.842 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.215781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.215815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.216054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.216087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.216272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.216305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.216475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.216528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.216715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.216750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 
00:36:53.843 [2024-07-10 14:39:03.216919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.216951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.217132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.217165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.217326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.217358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.217544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.217592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.217768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.217814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.218010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.218045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.218231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.218264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.218412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.218451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.218617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.218654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.218804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.218839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 
00:36:53.843 [2024-07-10 14:39:03.219014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.219048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.219201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.219235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.219408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.219465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.219644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.219691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.219858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.219893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.220071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.220104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.220248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.220281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.220443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.220477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.220630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.220666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.220855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.220895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 
00:36:53.843 [2024-07-10 14:39:03.221066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.221112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.221304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.221339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.221524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.221559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.221707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.221739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.221914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.221947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.222139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.222185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.222344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.222379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.222558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.222605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.222759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.222795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.222949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.222982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 
00:36:53.843 [2024-07-10 14:39:03.223144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.223180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.223328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.223363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.223553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.223587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.223747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.223782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.223932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.223966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.224153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.224186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.224355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.224389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.224549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.224583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.224750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.224789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.224940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.224975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 
00:36:53.843 [2024-07-10 14:39:03.225132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.225166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.225311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.225345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.225540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.225575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.225765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.225798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.225974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.226007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.226156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.226189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.226358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.226405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.226574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.226608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.226761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.226794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.226942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.226975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 
00:36:53.843 [2024-07-10 14:39:03.227119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.227151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.227323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.227355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.227532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.227580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.227765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.227812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.227985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.228019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.228163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.228196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.228375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.228408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.228596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.228632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.228787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.228821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.228995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.229043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 
00:36:53.843 [2024-07-10 14:39:03.229218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.229253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.229457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.229492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.229677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.229710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.229897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.229930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.230076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.843 [2024-07-10 14:39:03.230109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.843 qpair failed and we were unable to recover it. 00:36:53.843 [2024-07-10 14:39:03.230263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.230299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.230485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.230532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.230704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.230751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.230943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.230977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.231136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.231173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 
00:36:53.844 [2024-07-10 14:39:03.231322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.231355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.231524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.231558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.231757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.231804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.232007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.232054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.232209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.232244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.232429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.232465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.232655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.232689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.232838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.232871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.233044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.233082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.233242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.233287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 
00:36:53.844 [2024-07-10 14:39:03.233461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.233495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.233675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.233734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.233907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.233944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.234095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.234128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.234272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.234305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.234470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.234518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.234701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.234749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.234935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.234970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.235118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.235151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.235313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.235346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 
00:36:53.844 [2024-07-10 14:39:03.235556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.235591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.235750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.235783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.235938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.235970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.236162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.236195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.236344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.236376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.236560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.236607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.236800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.236847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.237005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.237041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.237192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.237225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.237380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.237431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 
00:36:53.844 [2024-07-10 14:39:03.237593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.237626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.237792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.237838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.238024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.238058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.238218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.238251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.238402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.238441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.238606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.238654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.238845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.238906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.239093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.239128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.239309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.239343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.239539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.239573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 
00:36:53.844 [2024-07-10 14:39:03.239749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.239783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.239933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.239966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.240123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.240161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.240327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.240361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.240543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.240578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.240736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.240769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.240917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.240950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.241156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.241189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.241337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.241370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.241552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.241587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 
00:36:53.844 [2024-07-10 14:39:03.241784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.241830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.241988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.242022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.242202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.242235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.242419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.242458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.242626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.242658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.242829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.242861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.243009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.243040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.243206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.243238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.243410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.243450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.243608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.243641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 
00:36:53.844 [2024-07-10 14:39:03.243816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.243848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.244004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.244037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.244194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.244228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.244389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.244443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.244627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.244673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.244829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.244863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.245018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.245051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.245240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.245273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.245513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.245561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.245749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.245784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 
00:36:53.844 [2024-07-10 14:39:03.245949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.245981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.246133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.844 [2024-07-10 14:39:03.246165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.844 qpair failed and we were unable to recover it. 00:36:53.844 [2024-07-10 14:39:03.246314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.246347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.246534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.246567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.246719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.246751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.246899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.246936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.247083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.247115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.247287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.247319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.247504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.247566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.247777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.247824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 
00:36:53.845 [2024-07-10 14:39:03.248006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.248041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.248246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.248280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.248451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.248485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.248667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.248701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.248877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.248910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.249060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.249105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.249270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.249302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.249492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.249525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.249671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.249703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.249915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.249947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 
00:36:53.845 [2024-07-10 14:39:03.250094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.250126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.250307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.250340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.250512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.250544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.250696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.250729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.250878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.250911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.251065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.251097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.251241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.251273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.251417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.251456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.251605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.251637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.251803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.251835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 
00:36:53.845 [2024-07-10 14:39:03.251984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.252016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.252169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.252204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.252361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.252393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.252553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.252586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.252737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.252769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.252921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.252955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.253116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.253148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.253297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.253329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.253508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.253557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.253733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.253780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 
00:36:53.845 [2024-07-10 14:39:03.253989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.254036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.254221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.254256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.254410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.254454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.254606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.254639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.254789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.254821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.254995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.255032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.255201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.255233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.255428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.255464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.255659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.255706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.255908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.255956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 
00:36:53.845 [2024-07-10 14:39:03.256140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.256175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.256321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.256354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.256552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.256587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.256742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.256776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.256960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.257007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.257155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.257194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.257340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.257373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.257536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.257569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.257772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.257819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.257992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.258028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 
00:36:53.845 [2024-07-10 14:39:03.258195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.258229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.258373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.258406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.258608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.258641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.258834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.258881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.259037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.259071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.259216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.259252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.259402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.259442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.259603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.259649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.259823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.259858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.260046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.260079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 
00:36:53.845 [2024-07-10 14:39:03.260261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.260294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.260445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.260480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-07-10 14:39:03.260659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.845 [2024-07-10 14:39:03.260707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.260871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.260906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.261085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.261118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.261269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.261303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.261499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.261534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.261710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.261751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.261900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.261933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.262104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.262137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 
00:36:53.846 [2024-07-10 14:39:03.262344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.262391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.262572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.262619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.262817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.262865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.263047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.263083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.263234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.263268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.263418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.263465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.263639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.263686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.263900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.263934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.264080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.264113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.264286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.264319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 
00:36:53.846 [2024-07-10 14:39:03.264500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.264535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.264702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.264750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.264920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.264955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.265131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.265165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.265316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.265349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.265530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.265563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.265729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.265776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.265967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.266002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.266157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.266192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.266349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.266383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 
00:36:53.846 [2024-07-10 14:39:03.266558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.266607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.266769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.266812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.266965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.266998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.267163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.267196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.267342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.267375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.267539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.267573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.267718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.267751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.267939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.267976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.268141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.268174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.268362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.268408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 
00:36:53.846 [2024-07-10 14:39:03.268604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.268638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.268792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.268827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.268994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.269029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.269183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.269217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.269386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.269446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.269633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.269669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.269820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.269854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.270006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.270039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.270192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.270227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.270382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.270416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 
00:36:53.846 [2024-07-10 14:39:03.270581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.270615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.270759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.270792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.270953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.270986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.271192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.271239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.271403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.271443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.271613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.271664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.271854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.271889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.272040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.272076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.272236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.272269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.272421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.272463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 
00:36:53.846 [2024-07-10 14:39:03.272614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.272647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.272798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.272831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.273009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.273042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.273242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.273290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.273486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.273521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.273690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.273737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.273899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.273933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.274078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.274111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.274259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.274291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.274453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.274488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 
00:36:53.846 [2024-07-10 14:39:03.274667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.274714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.274928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.274963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.275114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.275147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.275331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.275364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.275540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.275587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.275753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.275787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.275956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.275988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.276164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.276196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.276348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.276381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.276531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.276564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 
00:36:53.846 [2024-07-10 14:39:03.276728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.846 [2024-07-10 14:39:03.276775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-07-10 14:39:03.276967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.277003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.277170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.277218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.277389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.277432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.277588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.277632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.277781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.277813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.278002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.278034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.278183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.278216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.278371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.278406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.278586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.278633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 
00:36:53.847 [2024-07-10 14:39:03.278793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.278828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.278983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.279017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.279193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.279226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.279405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.279459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.279632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.279667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.279814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.279853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.280004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.280037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.280210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.280242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.280442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.280489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.280644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.280679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 
00:36:53.847 [2024-07-10 14:39:03.280823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.280856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.281037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.281070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.281240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.281273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.281475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.281522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.281749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.281795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.281958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.282004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.282163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.282196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.282342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.282374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.282539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.282572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.282726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.282759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 
00:36:53.847 [2024-07-10 14:39:03.282927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.282960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.283133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.283166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.283345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.283392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.283573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.283612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.283771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.283807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.283972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.284005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.284183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.284216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.284405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.284476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.284673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.284720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.284888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.284925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 
00:36:53.847 [2024-07-10 14:39:03.285079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.285113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.285265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.285298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.285472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.285527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.285693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.285733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.285902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.285949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-07-10 14:39:03.286114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.847 [2024-07-10 14:39:03.286149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:53.847 qpair failed and we were unable to recover it. 00:36:54.126 [2024-07-10 14:39:03.286316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.126 [2024-07-10 14:39:03.286350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.126 qpair failed and we were unable to recover it. 00:36:54.126 [2024-07-10 14:39:03.286537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.126 [2024-07-10 14:39:03.286572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.126 qpair failed and we were unable to recover it. 00:36:54.126 [2024-07-10 14:39:03.286726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.126 [2024-07-10 14:39:03.286760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.126 qpair failed and we were unable to recover it. 00:36:54.126 [2024-07-10 14:39:03.286914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.126 [2024-07-10 14:39:03.286949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.126 qpair failed and we were unable to recover it. 
00:36:54.126 [2024-07-10 14:39:03.287101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.126 [2024-07-10 14:39:03.287134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.126 qpair failed and we were unable to recover it. 00:36:54.126 [2024-07-10 14:39:03.287283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.126 [2024-07-10 14:39:03.287319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.126 qpair failed and we were unable to recover it. 00:36:54.126 [2024-07-10 14:39:03.287487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.126 [2024-07-10 14:39:03.287520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.126 qpair failed and we were unable to recover it. 00:36:54.126 [2024-07-10 14:39:03.287695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.287729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.287909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.287942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.288094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.288133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.288282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.288317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.288467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.288501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.288657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.288690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.288894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.288927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 
00:36:54.127 [2024-07-10 14:39:03.289083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.289115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.289265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.289297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.289492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.289540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.289735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.289783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.289974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.290009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.290156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.290189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.290333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.290366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.290535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.290569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.290726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.290760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.290921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.290954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 
00:36:54.127 [2024-07-10 14:39:03.291098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.291130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.291327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.291359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.291521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.291559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.291710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.291744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.291896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.291929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.292129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.292163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.292338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.292372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.292558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.292591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.292742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.292777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.292930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.292963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 
00:36:54.127 [2024-07-10 14:39:03.293140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.293174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.293378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.293421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.293587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.293621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.293769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.293803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.293983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.294016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.294163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.294196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.294374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.294407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.294559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.294592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.294770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.294803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.294969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.295004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 
00:36:54.127 [2024-07-10 14:39:03.295157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.295193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.295350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.295383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.295569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.127 [2024-07-10 14:39:03.295602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.127 qpair failed and we were unable to recover it. 00:36:54.127 [2024-07-10 14:39:03.295780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.295812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.295987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.296020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.296203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.296240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.296385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.296419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.296570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.296603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.296761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.296793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.296944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.296976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 
00:36:54.128 [2024-07-10 14:39:03.297196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.297243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.297419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.297472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.297632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.297668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.297854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.297888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.298057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.298091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.298268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.298311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.298488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.298522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.298671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.298704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.298876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.298908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.299086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.299118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 
00:36:54.128 [2024-07-10 14:39:03.299282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.299329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.299532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.299591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.299784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.299820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.299969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.300002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.300150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.300182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.300330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.300370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.300529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.300563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.300723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.300756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.300931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.300963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.301137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.301170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 
00:36:54.128 [2024-07-10 14:39:03.301326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.301373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.301568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.301616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.301788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.301835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.301998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.302033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.302197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.302230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.302385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.302418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.302580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.302613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.302787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.302820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.302971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.303005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.303175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.303222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 
00:36:54.128 [2024-07-10 14:39:03.303439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.303474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.303665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.303713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.303883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.303918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.128 [2024-07-10 14:39:03.304128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.128 [2024-07-10 14:39:03.304162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.128 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.304324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.304357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.304522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.304560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.304738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.304808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.304997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.305031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.305202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.305234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.305401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.305446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 
00:36:54.129 [2024-07-10 14:39:03.305613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.305660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.305850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.305885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.306091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.306124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.306264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.306296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.306480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.306513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.306690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.306737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.306894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.306929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.307079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.307112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.307328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.307361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 00:36:54.129 [2024-07-10 14:39:03.307570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.129 [2024-07-10 14:39:03.307617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.129 qpair failed and we were unable to recover it. 
00:36:54.129 [... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." sequence repeats continuously from 14:39:03.307 through 14:39:03.349: every attempt targets addr=10.0.0.2, port=4420, posix_sock_create fails with errno = 111, and nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair 0x6150001ffe80, 0x61500021ff00, 0x6150001f2a00 or 0x615000210000, none of which can be recovered ...]
00:36:54.134 [2024-07-10 14:39:03.349345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.349378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.349548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.349582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.349740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.349773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.349926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.349958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.350107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.350140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.350347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.350395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.350566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.350601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.350819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.350868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.351065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.351100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.351258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.351293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 
00:36:54.134 [2024-07-10 14:39:03.351482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.351516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.351690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.351737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.352001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.352042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.352212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.352245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.352443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.352476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.352649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.352681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.352830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.352864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.353014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.353047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.353195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.353227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 00:36:54.134 [2024-07-10 14:39:03.353385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.353418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.134 qpair failed and we were unable to recover it. 
00:36:54.134 [2024-07-10 14:39:03.353584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.134 [2024-07-10 14:39:03.353618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.353816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.353864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.354030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.354067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.354246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.354279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.354436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.354482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.354690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.354738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.354914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.354959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.355126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.355161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.355336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.355370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.355541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.355575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 
00:36:54.135 [2024-07-10 14:39:03.355737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.355770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.355959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.355992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.356169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.356202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.356381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.356414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.356590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.356623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.356807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.356841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.357017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.357050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.357212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.357248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.357446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.357491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.357691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.357724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 
00:36:54.135 [2024-07-10 14:39:03.357876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.357909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.358108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.358156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.358350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.358386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.358550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.358585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.358747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.358782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.358976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.359023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.359180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.359214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.359376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.359410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.359584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.359619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.359815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.359849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 
00:36:54.135 [2024-07-10 14:39:03.360000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.360048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.360208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.360243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.360417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.360482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.360660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.360707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.360883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.360918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.135 [2024-07-10 14:39:03.361079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.135 [2024-07-10 14:39:03.361112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.135 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.361283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.361315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.361528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.361576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.361740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.361775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.361933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.361968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 
00:36:54.136 [2024-07-10 14:39:03.362161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.362194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.362373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.362406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.362594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.362628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.362809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.362843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.363004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.363037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.363189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.363221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.363406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.363445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.363661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.363712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.363882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.363919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.364082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.364115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 
00:36:54.136 [2024-07-10 14:39:03.364269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.364302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.364483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.364526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.364677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.364711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.364859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.364892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.365095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.365128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.365291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.365344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.365535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.365570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.365755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.365808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.365973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.366010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.366182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.366216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 
00:36:54.136 [2024-07-10 14:39:03.366397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.366436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.366588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.366623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.366786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.366820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.366968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.367002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.367183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.367231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.367423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.367469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.367620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.367653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.367799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.367833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.367981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.368014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.368207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.368240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 
00:36:54.136 [2024-07-10 14:39:03.368411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.368452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.368651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.368698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.368881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.368925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.369109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.369145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.369294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.369328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.369535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.136 [2024-07-10 14:39:03.369570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.136 qpair failed and we were unable to recover it. 00:36:54.136 [2024-07-10 14:39:03.369750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.369784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.369966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.369999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.370167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.370200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.370350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.370382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 
00:36:54.137 [2024-07-10 14:39:03.370565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.370612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.370780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.370816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.370990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.371024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.371176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.371209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.371385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.371418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.371578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.371611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.371766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.371799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.371945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.371978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.372153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.372186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.372337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.372370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 
00:36:54.137 [2024-07-10 14:39:03.372537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.372571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.372719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.372752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.372906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.372939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.373117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.373151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.373358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.373395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.373554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.373588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.373734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.373767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.373923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.373957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.374108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.374141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.374326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.374360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 
00:36:54.137 [2024-07-10 14:39:03.374527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.374575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.374758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.374805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.375004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.375051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.375237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.375271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.375436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.375470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.375620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.375654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.375829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.375863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.376044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.376077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.376227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.376261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.376413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.376467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 
00:36:54.137 [2024-07-10 14:39:03.376619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.376654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.376835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.376869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.377020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.377057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.377205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.377238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.377419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.377461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.137 [2024-07-10 14:39:03.377654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.137 [2024-07-10 14:39:03.377702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.137 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.377870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.377917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.378080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.378116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.378300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.378334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.378486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.378520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 
00:36:54.138 [2024-07-10 14:39:03.378667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.378701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.378878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.378912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.379095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.379128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.379282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.379315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.379482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.379518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.379670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.379703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.379882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.379915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.380058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.380091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.380237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.380270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.380435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.380482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 
00:36:54.138 [2024-07-10 14:39:03.380645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.380680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.380835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.380869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.381034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.381068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.381250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.381283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.381435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.381468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.381658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.381691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.381847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.381882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.382038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.382072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.382247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.382281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.382460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.382494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 
00:36:54.138 [2024-07-10 14:39:03.382665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.382701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.382853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.382887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.383062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.383095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.383277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.383310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.383490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.383523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.383682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.383715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.383884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.383931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.384116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.384151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.384298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.384331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 00:36:54.138 [2024-07-10 14:39:03.384496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.384530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.138 qpair failed and we were unable to recover it. 
00:36:54.138 [2024-07-10 14:39:03.384693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.138 [2024-07-10 14:39:03.384737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.384896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.384929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.385091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.385129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.385278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.385312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.385483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.385519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.385706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.385753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.385944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.385979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.386143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.386176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.386350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.386382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.386554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.386589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 
00:36:54.139 [2024-07-10 14:39:03.386737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.386771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.386927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.386960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.387115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.387147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.387297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.387330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.387512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.387545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.387699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.387732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.387891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.387925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.388104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.388137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.388290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.388322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.388495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.388529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 
00:36:54.139 [2024-07-10 14:39:03.388708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.388741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.388892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.388924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.389100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.389132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.389281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.389313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.389472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.389506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.389681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.389728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.389915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.389951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.390126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.390159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.390308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.390341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.390505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.390539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 
00:36:54.139 [2024-07-10 14:39:03.390693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.390726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.390876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.390909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.139 qpair failed and we were unable to recover it. 00:36:54.139 [2024-07-10 14:39:03.391081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.139 [2024-07-10 14:39:03.391115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.391295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.391331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.391481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.391514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.391694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.391727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.391880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.391912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.392063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.392097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.392270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.392303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.392460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.392493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 
00:36:54.140 [2024-07-10 14:39:03.392642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.392675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.392826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.392858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.393009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.393047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.393203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.393236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.393380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.393412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.393563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.393596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.393764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.393796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.393945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.393978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.394139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.394171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.394321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.394365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 
00:36:54.140 [2024-07-10 14:39:03.394517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.394551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.394710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.394743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.394889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.394921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.395093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.395125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.395286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.395319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.395466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.395499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.395649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.395682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.395829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.395862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.396032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.396064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.396265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.396298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 
00:36:54.140 [2024-07-10 14:39:03.396452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.396485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.396635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.396667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.396817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.396850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.397017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.397049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.397198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.397231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.397389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.397422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.397587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.397619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.397774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.397807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.397956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.397989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 00:36:54.140 [2024-07-10 14:39:03.398148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.140 [2024-07-10 14:39:03.398181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.140 qpair failed and we were unable to recover it. 
00:36:54.141 [2024-07-10 14:39:03.398323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.398356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.398524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.398556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.398739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.398771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.398928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.398960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.399120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.399152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.399317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.399350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.399499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.399532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.399689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.399721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.399868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.399900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.400088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.400121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 
00:36:54.141 [2024-07-10 14:39:03.400265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.400297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.400454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.400496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.400675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.400712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.400864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.400898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.401070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.401103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.401290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.401322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.401512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.401546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.401717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.401750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.401944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.401977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.402152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.402185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 
00:36:54.141 [2024-07-10 14:39:03.402360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.402392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.402579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.402612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.402762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.402794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.402937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.402970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.403164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.403196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.403351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.403385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.403543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.403576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.403721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.403753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.403899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.403931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.404089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.404123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 
00:36:54.141 [2024-07-10 14:39:03.404269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.404301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.404469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.404503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.404654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.404686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.404846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.404879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.405029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.405063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.405236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.405269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.405413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.405451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.405621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.405654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.405793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.405825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 00:36:54.141 [2024-07-10 14:39:03.405988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.406021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.141 qpair failed and we were unable to recover it. 
00:36:54.141 [2024-07-10 14:39:03.406200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.141 [2024-07-10 14:39:03.406233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.406397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.406434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.406587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.406619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.406796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.406839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.407019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.407051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.407195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.407227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.407381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.407413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.407562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.407595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.407739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.407771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.407945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.407977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 
00:36:54.142 [2024-07-10 14:39:03.408155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.408188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.408330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.408362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.408526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.408563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.408723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.408755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.408909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.408943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.409099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.409131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.409310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.409342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.409495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.409528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.409676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.409709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 00:36:54.142 [2024-07-10 14:39:03.409851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.142 [2024-07-10 14:39:03.409884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.142 qpair failed and we were unable to recover it. 
00:36:54.142 [2024-07-10 14:39:03.410044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.142 [2024-07-10 14:39:03.410076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:54.142 qpair failed and we were unable to recover it.
00:36:54.147 [... the same three-line failure repeats back-to-back for the rest of this span (timestamps 2024-07-10 14:39:03.410044 through 14:39:03.450943): every connect() attempt for tqpair=0x6150001f2a00 to addr=10.0.0.2, port=4420 fails with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:36:54.147 [2024-07-10 14:39:03.451098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.147 [2024-07-10 14:39:03.451131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.147 qpair failed and we were unable to recover it. 00:36:54.147 [2024-07-10 14:39:03.451287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.147 [2024-07-10 14:39:03.451319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.147 qpair failed and we were unable to recover it. 00:36:54.147 [2024-07-10 14:39:03.451518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.147 [2024-07-10 14:39:03.451562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.147 qpair failed and we were unable to recover it. 00:36:54.147 [2024-07-10 14:39:03.451706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.147 [2024-07-10 14:39:03.451747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.147 qpair failed and we were unable to recover it. 00:36:54.147 [2024-07-10 14:39:03.451903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.147 [2024-07-10 14:39:03.451935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.147 qpair failed and we were unable to recover it. 00:36:54.147 [2024-07-10 14:39:03.452095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.147 [2024-07-10 14:39:03.452127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.147 qpair failed and we were unable to recover it. 00:36:54.147 [2024-07-10 14:39:03.452300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.452332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.452506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.452539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.452681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.452713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.452874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.452910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 
00:36:54.148 [2024-07-10 14:39:03.453063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.453095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.453248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.453280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.453453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.453485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.453637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.453669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.453846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.453878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.454024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.454056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.454235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.454267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.454412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.454450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.454609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.454642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.454789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.454822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 
00:36:54.148 [2024-07-10 14:39:03.454978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.455011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.455162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.455195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.455373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.455405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.455571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.455604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.455780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.455812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.455972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.456005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.456193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.456225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.456372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.456404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.456591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.456623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.456772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.456815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 
00:36:54.148 [2024-07-10 14:39:03.456957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.456989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.457168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.457200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.457371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.457404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.457580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.457613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.457762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.457794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.457936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.457968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.458117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.458149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.458298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.458330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.458514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.458547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 00:36:54.148 [2024-07-10 14:39:03.458740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.148 [2024-07-10 14:39:03.458773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.148 qpair failed and we were unable to recover it. 
00:36:54.148 [2024-07-10 14:39:03.458941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.458973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.459145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.459177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.459366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.459398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.459583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.459617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.459793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.459826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.459975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.460007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.460175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.460207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.460390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.460422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.460585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.460617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.460762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.460799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 
00:36:54.149 [2024-07-10 14:39:03.460952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.460985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.461154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.461186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.461344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.461377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.461588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.461620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.461797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.461829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.462000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.462033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.462178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.462210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.462361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.462399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.462549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.462582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.462736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.462768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 
00:36:54.149 [2024-07-10 14:39:03.462918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.462950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.463119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.463150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.463325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.463358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.463502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.463535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.463687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.463718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.463860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.463892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.464040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.464072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.464255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.464287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.464468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.464500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.464639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.464672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 
00:36:54.149 [2024-07-10 14:39:03.464841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.464873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.465014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.465046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.465222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.465254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.465409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.465447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.465609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.465641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.465790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.149 [2024-07-10 14:39:03.465823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.149 qpair failed and we were unable to recover it. 00:36:54.149 [2024-07-10 14:39:03.466027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.466060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.466226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.466259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.466421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.466458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.466613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.466647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 
00:36:54.150 [2024-07-10 14:39:03.466798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.466830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.466975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.467007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.467155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.467187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.467334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.467366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.467543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.467576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.467728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.467760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.467941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.467973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.468148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.468180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.468343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.468375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.468542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.468579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 
00:36:54.150 [2024-07-10 14:39:03.468731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.468763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.468909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.468941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.469091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.469123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.469271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.469314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.469476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.469508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.469661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.469695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.469874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.469907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.470086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.470118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.470295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.470327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.470492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.470524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 
00:36:54.150 [2024-07-10 14:39:03.470673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.470705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.470878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.470911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.471072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.471105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.471258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.471291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.471444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.471478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.471633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.471666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.471829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.471862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.472046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.472078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.472236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.472271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.472429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.472463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 
00:36:54.150 [2024-07-10 14:39:03.472634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.472674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.472839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.472872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.473032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.473065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.473236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.473268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.473448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.473481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.473635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.473669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.150 [2024-07-10 14:39:03.473848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.150 [2024-07-10 14:39:03.473881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.150 qpair failed and we were unable to recover it. 00:36:54.151 [2024-07-10 14:39:03.474081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.151 [2024-07-10 14:39:03.474113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.151 qpair failed and we were unable to recover it. 00:36:54.151 [2024-07-10 14:39:03.474285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.151 [2024-07-10 14:39:03.474317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.151 qpair failed and we were unable to recover it. 00:36:54.151 [2024-07-10 14:39:03.474475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.151 [2024-07-10 14:39:03.474507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.151 qpair failed and we were unable to recover it. 
00:36:54.151 [2024-07-10 14:39:03.474655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.151 [2024-07-10 14:39:03.474687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.151 qpair failed and we were unable to recover it. 00:36:54.151 [2024-07-10 14:39:03.474851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.151 [2024-07-10 14:39:03.474883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.151 qpair failed and we were unable to recover it. 00:36:54.151 [2024-07-10 14:39:03.475056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.151 [2024-07-10 14:39:03.475088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.151 qpair failed and we were unable to recover it. 00:36:54.151 [2024-07-10 14:39:03.475233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.151 [2024-07-10 14:39:03.475266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.151 qpair failed and we were unable to recover it. 00:36:54.151 [2024-07-10 14:39:03.475411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.151 [2024-07-10 14:39:03.475448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.151 qpair failed and we were unable to recover it. 00:36:54.151 [2024-07-10 14:39:03.475608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.151 [2024-07-10 14:39:03.475641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.151 qpair failed and we were unable to recover it. 00:36:54.151 [2024-07-10 14:39:03.475792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.151 [2024-07-10 14:39:03.475824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.151 qpair failed and we were unable to recover it. 00:36:54.151 [2024-07-10 14:39:03.475970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.151 [2024-07-10 14:39:03.476002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.151 qpair failed and we were unable to recover it. 00:36:54.151 [2024-07-10 14:39:03.476175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.151 [2024-07-10 14:39:03.476208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.151 qpair failed and we were unable to recover it. 00:36:54.151 [2024-07-10 14:39:03.476381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.151 [2024-07-10 14:39:03.476419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.151 qpair failed and we were unable to recover it. 
00:36:54.151 [2024-07-10 14:39:03.476587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.151 [2024-07-10 14:39:03.476619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:54.151 qpair failed and we were unable to recover it.
00:36:54.151 [... the same three-line error entry (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously for this qpair from 14:39:03.476 through 14:39:03.517 ...]
00:36:54.156 [2024-07-10 14:39:03.517757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.156 [2024-07-10 14:39:03.517789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:54.156 qpair failed and we were unable to recover it.
00:36:54.156 [2024-07-10 14:39:03.517930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.156 [2024-07-10 14:39:03.517963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.156 qpair failed and we were unable to recover it. 00:36:54.156 [2024-07-10 14:39:03.518137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.156 [2024-07-10 14:39:03.518170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.156 qpair failed and we were unable to recover it. 00:36:54.156 [2024-07-10 14:39:03.518345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.156 [2024-07-10 14:39:03.518378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.156 qpair failed and we were unable to recover it. 00:36:54.156 [2024-07-10 14:39:03.518557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.156 [2024-07-10 14:39:03.518590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.156 qpair failed and we were unable to recover it. 00:36:54.156 [2024-07-10 14:39:03.518752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.156 [2024-07-10 14:39:03.518785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.156 qpair failed and we were unable to recover it. 00:36:54.156 [2024-07-10 14:39:03.518931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.156 [2024-07-10 14:39:03.518963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.156 qpair failed and we were unable to recover it. 00:36:54.156 [2024-07-10 14:39:03.519147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.156 [2024-07-10 14:39:03.519180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.156 qpair failed and we were unable to recover it. 00:36:54.156 [2024-07-10 14:39:03.519370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.156 [2024-07-10 14:39:03.519403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.156 qpair failed and we were unable to recover it. 00:36:54.156 [2024-07-10 14:39:03.519558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.156 [2024-07-10 14:39:03.519591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.156 qpair failed and we were unable to recover it. 00:36:54.156 [2024-07-10 14:39:03.519800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.156 [2024-07-10 14:39:03.519843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.156 qpair failed and we were unable to recover it. 
00:36:54.157 [2024-07-10 14:39:03.519988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.520020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.520191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.520223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.520412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.520451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.520610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.520642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.520796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.520828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.520971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.521003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.521151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.521184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.521341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.521374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.521567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.521600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.521747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.521780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 
00:36:54.157 [2024-07-10 14:39:03.521953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.521985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.522145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.522178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.522322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.522354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.522498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.522530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.522707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.522740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.522884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.522917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.523085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.523118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.523278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.523312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.523464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.523498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.523648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.523686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 
00:36:54.157 [2024-07-10 14:39:03.523862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.523894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.524054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.524087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.524234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.524267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.524440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.524474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.524620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.524654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.524836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.524868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.525012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.525045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.525191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.525223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.525380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.525418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.525576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.525608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 
00:36:54.157 [2024-07-10 14:39:03.525762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.525795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.525969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.526002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.526155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.526188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.526351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.526385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.157 [2024-07-10 14:39:03.526580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.157 [2024-07-10 14:39:03.526613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.157 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.526760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.526793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.526950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.526982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.527141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.527174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.527322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.527355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.527505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.527539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 
00:36:54.158 [2024-07-10 14:39:03.527684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.527716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.527860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.527892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.528036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.528068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.528228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.528261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.528420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.528457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.528608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.528640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.528818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.528851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.529017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.529050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.529213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.529245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.529422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.529460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 
00:36:54.158 [2024-07-10 14:39:03.529613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.529647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.529812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.529844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.530017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.530049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.530224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.530256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.530398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.530447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.530624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.530657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.530826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.530859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.531032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.531064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.531209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.531241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.531385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.531421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 
00:36:54.158 [2024-07-10 14:39:03.531632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.531665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.531840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.531873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.532052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.532085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.532280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.532323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.532480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.532514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.532670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.532702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.532846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.532878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.533032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.533064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.533237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.533269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.533428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.533463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 
00:36:54.158 [2024-07-10 14:39:03.533615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.533648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.533860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.533892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.534062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.534094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.534251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.534284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.534443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.158 [2024-07-10 14:39:03.534475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.158 qpair failed and we were unable to recover it. 00:36:54.158 [2024-07-10 14:39:03.534667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.534699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.534863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.534895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.535041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.535073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.535255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.535287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.535443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.535476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 
00:36:54.159 [2024-07-10 14:39:03.535640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.535672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.535825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.535859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.536008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.536042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.536196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.536228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.536406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.536454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.536630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.536662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.536856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.536888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.537037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.537069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.537216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.537248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.537400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.537451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 
00:36:54.159 [2024-07-10 14:39:03.537624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.537656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.537823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.537856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.537995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.538027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.538176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.538208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.538379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.538411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.538570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.538602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.538746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.538778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.538953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.538985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.539128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.539160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.539308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.539344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 
00:36:54.159 [2024-07-10 14:39:03.539520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.539553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.539714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.539746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.539918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.539951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.540096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.540129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.540274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.540306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.540476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.540508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.540657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.540689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.540845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.540877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.541023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.541055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.541229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.541263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 
00:36:54.159 [2024-07-10 14:39:03.541417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.541454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.541599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.541631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.541816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.541848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.542025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.542058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.542220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.159 [2024-07-10 14:39:03.542253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.159 qpair failed and we were unable to recover it. 00:36:54.159 [2024-07-10 14:39:03.542394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.160 [2024-07-10 14:39:03.542437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.160 qpair failed and we were unable to recover it. 00:36:54.160 [2024-07-10 14:39:03.542597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.160 [2024-07-10 14:39:03.542631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.160 qpair failed and we were unable to recover it. 00:36:54.160 [2024-07-10 14:39:03.542780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.160 [2024-07-10 14:39:03.542812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.160 qpair failed and we were unable to recover it. 00:36:54.160 [2024-07-10 14:39:03.542975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.160 [2024-07-10 14:39:03.543007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.160 qpair failed and we were unable to recover it. 00:36:54.160 [2024-07-10 14:39:03.543182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.160 [2024-07-10 14:39:03.543215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.160 qpair failed and we were unable to recover it. 
00:36:54.160 [2024-07-10 14:39:03.543370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.160 [2024-07-10 14:39:03.543402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:54.160 qpair failed and we were unable to recover it.
00:36:54.160 [... the same three-line failure sequence repeats continuously from 14:39:03.543 to 14:39:03.584: every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered, first for tqpair=0x6150001f2a00 (until 14:39:03.569), then briefly for tqpair=0x615000210000 (14:39:03.569-14:39:03.573) and tqpair=0x6150001ffe80 (14:39:03.573-14:39:03.576), and again for tqpair=0x6150001f2a00 through 14:39:03.584 ...]
00:36:54.165 [2024-07-10 14:39:03.584915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.165 [2024-07-10 14:39:03.584947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.165 qpair failed and we were unable to recover it. 00:36:54.165 [2024-07-10 14:39:03.585106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.165 [2024-07-10 14:39:03.585139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.165 qpair failed and we were unable to recover it. 00:36:54.165 [2024-07-10 14:39:03.585288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.165 [2024-07-10 14:39:03.585319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.165 qpair failed and we were unable to recover it. 00:36:54.165 [2024-07-10 14:39:03.585501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.165 [2024-07-10 14:39:03.585536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.165 qpair failed and we were unable to recover it. 00:36:54.165 [2024-07-10 14:39:03.585737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.165 [2024-07-10 14:39:03.585773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.165 qpair failed and we were unable to recover it. 00:36:54.165 [2024-07-10 14:39:03.585945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.165 [2024-07-10 14:39:03.585978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.435 qpair failed and we were unable to recover it. 00:36:54.435 [2024-07-10 14:39:03.586136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.435 [2024-07-10 14:39:03.586169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.586314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.586346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.586489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.586523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.586686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.586729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 
00:36:54.436 [2024-07-10 14:39:03.586911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.586958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.587160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.587210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.587391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.587448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.587628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.587670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.587870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.587918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.588089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.588133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.588349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.588393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.588598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.588642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.588844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.588892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.589064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.589108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 
00:36:54.436 [2024-07-10 14:39:03.589315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.589365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.589550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.589595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.589750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.589784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.589942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.589975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.590136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.590169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.590320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.590366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.590528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.590561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.590709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.590741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.590902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.590936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.591112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.591153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 
00:36:54.436 [2024-07-10 14:39:03.591332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.591364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.591523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.591556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.591701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.591733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.591915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.591947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.592093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.592126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.592318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.592350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.592518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.592556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.592710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.592744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.592917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.592949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.593102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.593136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 
00:36:54.436 [2024-07-10 14:39:03.593310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.593342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.593491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.593524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.593678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.593710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.593870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.593903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.594061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.594093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.594252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.594285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.436 qpair failed and we were unable to recover it. 00:36:54.436 [2024-07-10 14:39:03.594452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.436 [2024-07-10 14:39:03.594484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.594647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.594679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.594826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.594858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.595025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.595057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 
00:36:54.437 [2024-07-10 14:39:03.595235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.595267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.595444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.595476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.595641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.595673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.595824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.595856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.595999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.596031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.596181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.596213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.596361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.596395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.596588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.596621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.596773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.596805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.596965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.596998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 
00:36:54.437 [2024-07-10 14:39:03.597151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.597185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.597338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.597370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.597553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.597585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.597750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.597783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.597942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.597974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.598124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.598158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.598350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.598382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.598555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.598587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.598738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.598770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.598915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.598947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 
00:36:54.437 [2024-07-10 14:39:03.599125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.599157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.599335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.599367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.599546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.599579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.599726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.599761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.599938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.599970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.600111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.600143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.600331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.600368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.600576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.600609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.600800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.600832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.601001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.601033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 
00:36:54.437 [2024-07-10 14:39:03.601208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.601240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.601395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.601434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.601591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.601625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.601783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.601815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.601962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.601994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.602152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.602194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.602360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.437 [2024-07-10 14:39:03.602392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.437 qpair failed and we were unable to recover it. 00:36:54.437 [2024-07-10 14:39:03.602543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.602575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.602745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.602777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.602956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.602989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 
00:36:54.438 [2024-07-10 14:39:03.603143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.603175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.603326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.603359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.603511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.603545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.603691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.603723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.603872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.603904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.604055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.604087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.604286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.604318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.604464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.604496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.604638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.604671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.604813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.604845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 
00:36:54.438 [2024-07-10 14:39:03.604991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.605023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.605169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.605202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.605348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.605381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.605547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.605580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.605741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.605773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.605974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.606006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.606153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.606185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.606333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.606365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.606509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.606542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.606690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.606723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 
00:36:54.438 [2024-07-10 14:39:03.606862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.606894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.607069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.607101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.607258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.607290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.607443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.607476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.607634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.607667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.607817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.607849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.607991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.608027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.608207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.608241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.608417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.608455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.608606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.608638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 
00:36:54.438 [2024-07-10 14:39:03.608812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.608844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.608997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.609031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.609184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.609218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.609369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.609402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.609560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.609592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.609734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.609767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.609936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.438 [2024-07-10 14:39:03.609968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.438 qpair failed and we were unable to recover it. 00:36:54.438 [2024-07-10 14:39:03.610113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.610146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.610292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.610324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.610476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.610509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 
00:36:54.439 [2024-07-10 14:39:03.610664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.610698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.610843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.610875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.611027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.611059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.611234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.611267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.611434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.611466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.611612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.611644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.611787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.611821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.611993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.612026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.612182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.612214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.612364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.612397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 
00:36:54.439 [2024-07-10 14:39:03.612585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.612618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.612766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.612800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.612953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.612986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.613160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.613193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.613334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.613365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.613538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.613570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.613724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.613756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.613924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.613956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.614111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.614142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.614372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.614415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 
00:36:54.439 [2024-07-10 14:39:03.614619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.614651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.614802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.614834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.614984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.615016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.615182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.615214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.615398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.615436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.615617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.615648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:54.439 [2024-07-10 14:39:03.615826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.615857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:54.439 [2024-07-10 14:39:03.616032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.616064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 
00:36:54.439 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:54.439 [2024-07-10 14:39:03.616215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.616248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.616397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.616436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.616607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.616640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.616795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.616827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.616973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.617009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.617164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.617196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.617349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.617382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.617551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.439 [2024-07-10 14:39:03.617586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.439 qpair failed and we were unable to recover it. 00:36:54.439 [2024-07-10 14:39:03.617743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.617776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 
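The xtrace lines interleaved above ((( i == 0 )), return 0, timing_exit start_nvmf_tgt, set +x) show the harness deciding that the NVMe-oF target process came up successfully while the host-side connect retries continue in the background. A minimal bash sketch of that retry-until-ready pattern, purely illustrative; the real helper in common/autotest_common.sh, its counter name and its readiness probe are assumptions here:

    wait_for_ready() {
        local i timeout=${1:-30}
        # count down; i only reaches 0 if the target never became ready
        for (( i = timeout; i > 0; i-- )); do
            probe_target && break    # hypothetical readiness check
            sleep 1
        done
        (( i == 0 )) && return 1     # timed out
        return 0                     # ready; mirrors the traced 'return 0'
    }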
00:36:54.440 [2024-07-10 14:39:03.617957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.617990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.618159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.618192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.618400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.618445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.618609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.618641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.618802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.618835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.618977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.619008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.619174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.619207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.619384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.619418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.619602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.619635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.619801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.619833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 
00:36:54.440 [2024-07-10 14:39:03.619998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.620029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.620182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.620214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.620379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.620411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.620607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.620639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.620852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.620903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.621067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.621102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.621282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.621315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.621484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.621519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.621681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.621723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.621873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.621906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 
00:36:54.440 [2024-07-10 14:39:03.622054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.622087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.622233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.622275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.622455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.622489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.622637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.622670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.622843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.622876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.623051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.623083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.623246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.623278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.623488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.623531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.623678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.623711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.623886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.623919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 
00:36:54.440 [2024-07-10 14:39:03.624092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.624125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.624298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.624331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.624485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.440 [2024-07-10 14:39:03.624518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.440 qpair failed and we were unable to recover it. 00:36:54.440 [2024-07-10 14:39:03.624683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.624717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.624871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.624906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.625085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.625117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.625266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.625298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.625456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.625489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.625670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.625702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.625846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.625878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 
00:36:54.441 [2024-07-10 14:39:03.626027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.626059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.626221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.626254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.626403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.626445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.626622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.626655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.626805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.626837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.627001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.627033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.627210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.627242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.627428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.627461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.627651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.627683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.627854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.627885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 
00:36:54.441 [2024-07-10 14:39:03.628035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.628066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.628241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.628274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.628445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.628478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.628625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.628657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.628832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.628881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.629042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.629077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.629232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.629266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.629449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.629494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.629699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.629732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.629891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.629924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 
00:36:54.441 [2024-07-10 14:39:03.630100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.630133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.630305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.630338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.630508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.630542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.630723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.630755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.630929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.630962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.631101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.631133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.631285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.631317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.631463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.631496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.631674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.631706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.631873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.631904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 
00:36:54.441 [2024-07-10 14:39:03.632084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.632116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.632267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.632298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.632446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.632479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.632635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.441 [2024-07-10 14:39:03.632666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.441 qpair failed and we were unable to recover it. 00:36:54.441 [2024-07-10 14:39:03.632817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.632860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.633042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.633075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.633250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.633282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.633475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.633508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.633657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.633692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.633847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.633880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 
00:36:54.442 [2024-07-10 14:39:03.634024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.634056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.634210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.634244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.634444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.634492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.634677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.634711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.634873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.634907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.635054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.635087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.635243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.635276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.635448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.635483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.635636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.635669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.635820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.635852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 
00:36:54.442 [2024-07-10 14:39:03.636003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.636037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.636191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.636224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.636377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.636410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.636563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.636596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.636768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.636805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.636982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.637014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.637184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.637217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.637382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.637414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.637583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.637616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.637784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.637816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 
00:36:54.442 [2024-07-10 14:39:03.637982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.638015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.638163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.638197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.638350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.638382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.638576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.638610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.638759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.638791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.638968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.639000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.639164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.639196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.639373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.639406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.639584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.639618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.639759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.639791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 
00:36:54.442 [2024-07-10 14:39:03.639948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.639979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.640151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:54.442 [2024-07-10 14:39:03.640184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 [2024-07-10 14:39:03.640365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.640398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.442 qpair failed and we were unable to recover it. 00:36:54.442 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:54.442 [2024-07-10 14:39:03.640565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.442 [2024-07-10 14:39:03.640599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 00:36:54.443 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.443 [2024-07-10 14:39:03.640773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.443 [2024-07-10 14:39:03.640806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 00:36:54.443 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:54.443 [2024-07-10 14:39:03.640981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.443 [2024-07-10 14:39:03.641020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 00:36:54.443 [2024-07-10 14:39:03.641193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.443 [2024-07-10 14:39:03.641226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 00:36:54.443 [2024-07-10 14:39:03.641370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.443 [2024-07-10 14:39:03.641402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 
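Between the connection errors, the test installs its cleanup trap (process_shm ... nvmftestfini on SIGINT/SIGTERM/EXIT) and runs rpc_cmd bdev_malloc_create 64 512 -b Malloc0, asking the running target for a 64 MB RAM-backed bdev with a 512-byte block size named Malloc0. Run by hand against a target listening on the default RPC socket, the equivalent step would look roughly like this (paths assumed relative to the SPDK repository root):

    # create the malloc bdev used by the disconnect test
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # confirm it was created
    ./scripts/rpc.py bdev_get_bdevs -b Malloc0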
00:36:54.443 [2024-07-10 14:39:03.641591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.443 [2024-07-10 14:39:03.641624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 00:36:54.443 [2024-07-10 14:39:03.641773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.443 [2024-07-10 14:39:03.641805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 00:36:54.443 [2024-07-10 14:39:03.641971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.443 [2024-07-10 14:39:03.642004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 00:36:54.443 [2024-07-10 14:39:03.642162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.443 [2024-07-10 14:39:03.642196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 00:36:54.443 [2024-07-10 14:39:03.642357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.443 [2024-07-10 14:39:03.642389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 00:36:54.443 [2024-07-10 14:39:03.642573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.443 [2024-07-10 14:39:03.642605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 00:36:54.443 [2024-07-10 14:39:03.642765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.443 [2024-07-10 14:39:03.642797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 00:36:54.443 [2024-07-10 14:39:03.642943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.443 [2024-07-10 14:39:03.642975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 00:36:54.443 [2024-07-10 14:39:03.643144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.443 [2024-07-10 14:39:03.643177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 00:36:54.443 [2024-07-10 14:39:03.643346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.443 [2024-07-10 14:39:03.643378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.443 qpair failed and we were unable to recover it. 
00:36:54.443 [2024-07-10 14:39:03.643545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.443 [2024-07-10 14:39:03.643578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:54.443 qpair failed and we were unable to recover it.
00:36:54.443 [2024-07-10 14:39:03.643719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.443 [2024-07-10 14:39:03.643757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:54.443 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x6150001f2a00 (addr=10.0.0.2, port=4420), followed by "qpair failed and we were unable to recover it." — repeats continuously, with in-log timestamps advancing from 14:39:03.643 through 14:39:03.685 and console timestamps 00:36:54.443-00:36:54.448 ...]
00:36:54.448 [2024-07-10 14:39:03.685491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.448 [2024-07-10 14:39:03.685534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.448 qpair failed and we were unable to recover it. 00:36:54.448 [2024-07-10 14:39:03.685696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.448 [2024-07-10 14:39:03.685728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.448 qpair failed and we were unable to recover it. 00:36:54.448 [2024-07-10 14:39:03.685902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.448 [2024-07-10 14:39:03.685934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.448 qpair failed and we were unable to recover it. 00:36:54.448 [2024-07-10 14:39:03.686088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.448 [2024-07-10 14:39:03.686121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.448 qpair failed and we were unable to recover it. 00:36:54.448 [2024-07-10 14:39:03.686298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.448 [2024-07-10 14:39:03.686331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.448 qpair failed and we were unable to recover it. 00:36:54.448 [2024-07-10 14:39:03.686485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.448 [2024-07-10 14:39:03.686518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.448 qpair failed and we were unable to recover it. 00:36:54.448 [2024-07-10 14:39:03.686678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.448 [2024-07-10 14:39:03.686710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.448 qpair failed and we were unable to recover it. 00:36:54.448 [2024-07-10 14:39:03.686876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.686908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.687055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.687087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.687240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.687272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 
00:36:54.449 [2024-07-10 14:39:03.687436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.687469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.687625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.687658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.687800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.687833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.687981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.688013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.688168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.688202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.688405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.688451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.688600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.688633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.688782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.688814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.689000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.689033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.689185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.689216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 
00:36:54.449 [2024-07-10 14:39:03.689368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.689400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.689589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.689626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.689769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.689801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.689967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.689999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.690170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.690202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.690349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.690383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.690579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.690612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.690768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.690800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.690948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.690980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.691141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.691173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 
00:36:54.449 [2024-07-10 14:39:03.691354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.691386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.691548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.691581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.691759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.691791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.691982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.692015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.692169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.692202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.692358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.692390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.692575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.692608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.692792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.692824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.692968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.693000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.693151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.693184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 
00:36:54.449 [2024-07-10 14:39:03.693334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.693367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.693517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.693550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.693699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.693731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.693881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.693913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.694075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.694107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.694284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.694316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.694498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.694530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.694681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.449 [2024-07-10 14:39:03.694714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.449 qpair failed and we were unable to recover it. 00:36:54.449 [2024-07-10 14:39:03.694899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.694932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.695073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.695105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 
00:36:54.450 [2024-07-10 14:39:03.695275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.695308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.695452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.695484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.695626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.695658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.695847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.695879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.696030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.696064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.696228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.696261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.696470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.696503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.696661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.696692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.696836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.696868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.697047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.697079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 
00:36:54.450 [2024-07-10 14:39:03.697232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.697264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.697417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.697458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.697626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.697659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.697831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.697863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.698010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.698053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.698224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.698256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.698404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.698443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.698603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.698635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.698840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.698872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.699016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.699048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 
00:36:54.450 [2024-07-10 14:39:03.699191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.699224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.699367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.699399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.699569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.699602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.699745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.699778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.699949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.699981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.700132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.700164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.700336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.700368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.700547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.700580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.700731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.700764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.700913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.700945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 
00:36:54.450 [2024-07-10 14:39:03.701088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.701121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.701282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.701314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.701468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.701501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.701683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.701715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.701905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.701937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.702085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.702117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.702263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.702295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.702474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.702507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.450 [2024-07-10 14:39:03.702688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.450 [2024-07-10 14:39:03.702728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.450 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.702880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.702913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 
00:36:54.451 [2024-07-10 14:39:03.703085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.703118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.703268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.703300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.703457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.703489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.703661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.703693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.703860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.703891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.704065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.704097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.704241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.704273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.704471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.704503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.704654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.704686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.704845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.704878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 
00:36:54.451 [2024-07-10 14:39:03.705055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.705088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.705295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.705331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.705505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.705539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.705699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.705732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.705893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.705926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.706104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.706137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.706289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.706321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.706472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.706505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.706653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.706687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.706846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.706878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 
00:36:54.451 [2024-07-10 14:39:03.707069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.707102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.707251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.707283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.707437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.707469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.707643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.707674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.707854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.707886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.708036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.708068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.708216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.708248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.708401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.708438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.708650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.708682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.708826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.708859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 
00:36:54.451 [2024-07-10 14:39:03.709009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.709042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.709192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.709225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.709403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.709444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.709623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.709655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.709829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.709861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.710018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.710051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.710231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.710263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.710441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.710474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.710655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.710697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 00:36:54.451 [2024-07-10 14:39:03.710859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.451 [2024-07-10 14:39:03.710891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.451 qpair failed and we were unable to recover it. 
00:36:54.452 [2024-07-10 14:39:03.711047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.711079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.711228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.711260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.711454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.711490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.711641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.711673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.711836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.711868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.712059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.712091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.712294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.712327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.712495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.712528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.712683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.712721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.712867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.712900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 
00:36:54.452 [2024-07-10 14:39:03.713074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.713106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.713268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.713303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.713450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.713486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.713635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.713667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.713840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.713872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.714041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.714073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.714238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.714270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.714413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.714462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.714638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.714671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.714877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.714909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 
00:36:54.452 [2024-07-10 14:39:03.715051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.715083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.715231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.715264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.715411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.715461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.715601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.715633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.715789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.715821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.715998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.716030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.716199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.716231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.716403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.716442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.716632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.716664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.716813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.716846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 
00:36:54.452 [2024-07-10 14:39:03.716992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.717024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.717199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.717231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.717403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.717441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.717594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.717626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.717765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.717797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.717973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.718005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.718152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.718184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.718335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.718367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.718532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.718565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 00:36:54.452 [2024-07-10 14:39:03.718711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.718743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.452 qpair failed and we were unable to recover it. 
00:36:54.452 [2024-07-10 14:39:03.718924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.452 [2024-07-10 14:39:03.718956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.719099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.719132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.719280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.719312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.719491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.719523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.719667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.719706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.719882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.719914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.720085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.720118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.720259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.720291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.720439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.720474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.720651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.720683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 
00:36:54.453 [2024-07-10 14:39:03.720866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.720900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.721055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.721092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.721245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.721277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.721422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.721460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.721638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.721670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.721857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.721889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.722042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.722074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.722235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.722267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 Malloc0 00:36:54.453 [2024-07-10 14:39:03.722415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.722458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.722662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.722695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 
00:36:54.453 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.453 [2024-07-10 14:39:03.722880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.722912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:54.453 [2024-07-10 14:39:03.723060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.453 [2024-07-10 14:39:03.723093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:54.453 [2024-07-10 14:39:03.723244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.723286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.723448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.723481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.723635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.723668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.723810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.723842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.724014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.724046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.724199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.724231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 
00:36:54.453 [2024-07-10 14:39:03.724369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.724400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.724552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.724584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.724758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.724790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.724942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.724974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.725154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.725186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.453 [2024-07-10 14:39:03.725333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.453 [2024-07-10 14:39:03.725367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.453 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.725546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.725578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.725732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.725764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.725931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.725967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 
00:36:54.454 [2024-07-10 14:39:03.726078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:54.454 [2024-07-10 14:39:03.726122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.726154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.726303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.726334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.726488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.726519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.726673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.726705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.726875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.726906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.727080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.727112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.727272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.727304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.727471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.727505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.727661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.727695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 
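Buried in the error stream, the test script has moved on to configuring the target: rpc_cmd nvmf_create_transport -t tcp -o is issued and the target answers with the "*** TCP Transport Init ***" notice above, confirming the TCP transport was created. Run by hand against a live nvmf_tgt, the equivalent call would look roughly like the line below; the ./spdk checkout path and the default RPC socket /var/tmp/spdk.sock are assumptions, and the -o flag simply mirrors what the test passes:
./spdk/scripts/rpc.py nvmf_create_transport -t TCP -o   # create the TCP transport on the running target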
00:36:54.454 [2024-07-10 14:39:03.727851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.727889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.728055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.728087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.728233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.728265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.728410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.728452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.728612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.728643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.728816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.728848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.729010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.729043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.729219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.729251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.729429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.729462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.729638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.729670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 
00:36:54.454 [2024-07-10 14:39:03.729845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.729877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.730030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.730062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.730256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.730288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.730436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.730468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.730612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.730644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.730798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.730830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.730987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.731019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.731177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.731209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.731360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.731392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.731551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.731584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 
00:36:54.454 [2024-07-10 14:39:03.731757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.731789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.731951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.731983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.732139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.732172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.732349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.732381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.732550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.732585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.732748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.732781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.732955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.732987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.733166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.454 [2024-07-10 14:39:03.733198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.454 qpair failed and we were unable to recover it. 00:36:54.454 [2024-07-10 14:39:03.733379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.733411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.733582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.733614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 
00:36:54.455 [2024-07-10 14:39:03.733769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.733802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.733974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.734006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.734155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.734188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.734335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.734368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.734529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.734563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.734728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.734761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.734913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.734945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.735117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.735149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.735355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.735388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.735543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.735576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 
00:36:54.455 [2024-07-10 14:39:03.735750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.455 [2024-07-10 14:39:03.735794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:54.455 [2024-07-10 14:39:03.735939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.735971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.455 [2024-07-10 14:39:03.736140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.736172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.736328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.736360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.736542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.736575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.736723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.736755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.736903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.736936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.737099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.737131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 
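Next the script creates the subsystem the host will eventually attach to: rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, where -a allows any host NQN to connect and -s sets the reported serial number. A hand-run equivalent (same assumptions about the rpc.py path and default socket as above) would be:
./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # new subsystem, any host allowed, serial SPDK00000000000001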
00:36:54.455 [2024-07-10 14:39:03.737297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.737330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.737488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.737521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.737665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.737697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.737863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.737895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.738047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.738079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.738233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.738265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.738418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.738455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.738620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.738652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.738804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.738837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.739013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.739045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 
00:36:54.455 [2024-07-10 14:39:03.739205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.739238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.739401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.739460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.739618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.739650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.739801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.739833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.739985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.740017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.740169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.740201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.740362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.740394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.740554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.740586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.740733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.455 [2024-07-10 14:39:03.740766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.455 qpair failed and we were unable to recover it. 00:36:54.455 [2024-07-10 14:39:03.740916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.740947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 
00:36:54.456 [2024-07-10 14:39:03.741090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.741127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.741319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.741351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.741502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.741535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.741688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.741720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.741872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.741904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.742043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.742075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.742217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.742249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.742391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.742422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.742577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.742609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.742786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.742818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 
00:36:54.456 [2024-07-10 14:39:03.742962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.742993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.743173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.743205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.743355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.743388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.743556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.743589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.743754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.456 [2024-07-10 14:39:03.743786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:54.456 [2024-07-10 14:39:03.743941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.743975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.456 [2024-07-10 14:39:03.744111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.744144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:54.456 [2024-07-10 14:39:03.744296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.744328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 
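The last configuration step visible in this part of the log attaches a namespace: rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0, presumably using the malloc bdev whose name ("Malloc0") appears in the trace a little earlier. Done by hand, that step plus the listener the host above keeps probing for (the listener call is an assumed follow-up, not shown in this chunk of the log) would look roughly like:
./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose the Malloc0 bdev as a namespace of cnode1
./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # assumed next step: listen on the address/port the host is retrying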
00:36:54.456 [2024-07-10 14:39:03.744496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.744528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.744678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.744711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.744883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.744916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.745082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.745114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.745263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.745296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.745475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.745508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.745666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.745698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.745846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.745886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.746032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.746064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.746240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.746272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 
00:36:54.456 [2024-07-10 14:39:03.746421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.746459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.746605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.746637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.746785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.746817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.746977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.747009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.747160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.747192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.747361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.747393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.747579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.747611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.747768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.747800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.747945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.748008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 00:36:54.456 [2024-07-10 14:39:03.748180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.748212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.456 qpair failed and we were unable to recover it. 
00:36:54.456 [2024-07-10 14:39:03.748383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.456 [2024-07-10 14:39:03.748415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.748576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.748608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.748760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.748791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.748945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.748977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.749121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.749153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.749312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.749344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.749487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.749519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.749675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.749707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.749903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.749935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.750093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.750127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 
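Every one of the repeated posix_sock_create failures above reports errno = 111. On a Linux build host that value is ECONNREFUSED: the initiator is retrying 10.0.0.2:4420 before any listener has been added for the subsystem, so each TCP connection attempt is simply refused. A one-liner to confirm the mapping (any Python 3 on the host will do):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused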
00:36:54.457 [2024-07-10 14:39:03.750279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.750311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.750488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.750520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.750672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.750704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.750842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.750874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.751028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.751060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.751216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.751248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.751447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.751480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.751629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.751662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.457 [2024-07-10 14:39:03.751819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.751852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 
00:36:54.457 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:54.457 [2024-07-10 14:39:03.752026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.752058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.457 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:54.457 [2024-07-10 14:39:03.752238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.752272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.752454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.752486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.752672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.752704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.752856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.752889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.753056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.753088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.753259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.753291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.753441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.753474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 
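The host/target_disconnect.sh@25 trace above is where the test finally adds the TCP listener for the subsystem. rpc_cmd is effectively the autotest harness's wrapper around scripts/rpc.py, so the call is roughly equivalent to the following (a hedged sketch, not taken verbatim from this run; paths assume an SPDK checkout):

    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # Once this returns, the target prints the "NVMe/TCP Target Listening" notice seen shortly below.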
00:36:54.457 [2024-07-10 14:39:03.753630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.753662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.753821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.753856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.754022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.754055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.754235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.754267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.754419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.754457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.754619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.754651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.754821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.754854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.457 qpair failed and we were unable to recover it. 00:36:54.457 [2024-07-10 14:39:03.755038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.457 [2024-07-10 14:39:03.755071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.458 qpair failed and we were unable to recover it. 00:36:54.458 [2024-07-10 14:39:03.755251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.458 [2024-07-10 14:39:03.755284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.458 qpair failed and we were unable to recover it. 00:36:54.458 [2024-07-10 14:39:03.755441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.458 [2024-07-10 14:39:03.755477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:36:54.458 qpair failed and we were unable to recover it. 
00:36:54.458 [2024-07-10 14:39:03.755634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.458 [2024-07-10 14:39:03.755667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:54.458 qpair failed and we were unable to recover it.
00:36:54.458 [2024-07-10 14:39:03.755818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.458 [2024-07-10 14:39:03.755850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:36:54.458 qpair failed and we were unable to recover it.
00:36:54.458 [2024-07-10 14:39:03.755937] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:54.458 [2024-07-10 14:39:03.757883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:54.458 [2024-07-10 14:39:03.758116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:54.458 [2024-07-10 14:39:03.758153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:54.458 [2024-07-10 14:39:03.758180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:54.458 [2024-07-10 14:39:03.758201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00
00:36:54.458 [2024-07-10 14:39:03.758260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:54.458 qpair failed and we were unable to recover it.
00:36:54.458 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:54.458 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:54.458 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:54.458 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:54.458 [2024-07-10 14:39:03.767486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:54.458 [2024-07-10 14:39:03.767660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:54.458 [2024-07-10 14:39:03.767699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:54.458 [2024-07-10 14:39:03.767723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:54.458 [2024-07-10 14:39:03.767742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00
00:36:54.458 [2024-07-10 14:39:03.767791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:54.458 qpair failed and we were unable to recover it.
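From this point the failure mode changes. The listener is up, so the TCP connect succeeds, but the NVMe-oF Fabrics CONNECT for the I/O queue pair is rejected: the target no longer recognizes controller ID 0x1 (_nvmf_ctrlr_add_io_qpair: "Unknown controller ID 0x1"), and the host sees the CONNECT completion with sct 1, sc 130. sct 1 is the command-specific status type, and 130 is 0x82, which for the Fabrics CONNECT command corresponds to an invalid-connect-parameters style rejection if I read the Fabrics status codes correctly; the host then gives up on qpair id 3. The hex conversion, for reference:

    printf 'sc %d = 0x%02x\n' 130 130
    # sc 130 = 0x82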
00:36:54.458 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.458 14:39:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1554885 00:36:54.458 [2024-07-10 14:39:03.777469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.458 [2024-07-10 14:39:03.777648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.458 [2024-07-10 14:39:03.777682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.458 [2024-07-10 14:39:03.777705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.458 [2024-07-10 14:39:03.777725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.458 [2024-07-10 14:39:03.777767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.458 qpair failed and we were unable to recover it. 00:36:54.458 [2024-07-10 14:39:03.787551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.458 [2024-07-10 14:39:03.787772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.458 [2024-07-10 14:39:03.787804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.458 [2024-07-10 14:39:03.787833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.458 [2024-07-10 14:39:03.787853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.458 [2024-07-10 14:39:03.787894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.458 qpair failed and we were unable to recover it. 00:36:54.458 [2024-07-10 14:39:03.797490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.458 [2024-07-10 14:39:03.797661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.458 [2024-07-10 14:39:03.797694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.458 [2024-07-10 14:39:03.797717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.458 [2024-07-10 14:39:03.797736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.458 [2024-07-10 14:39:03.797777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.458 qpair failed and we were unable to recover it. 
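The host/target_disconnect.sh@50 trace above ("wait 1554885") is the test blocking on the background host process it launched earlier, 1554885 presumably being that process's PID; the remaining entries in this section are that process repeatedly retrying and failing to re-establish qpair id 3 against cnode1. The control flow is the usual bash launch-and-wait pattern, sketched here with illustrative names only (the real helper and PID handling live in host/target_disconnect.sh):

    reconnect_host_app &        # hypothetical background initiator that keeps retrying
    app_pid=$!
    # ... the target is torn down and reconfigured while the app retries ...
    wait "$app_pid"             # corresponds to "wait 1554885" in the trace above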
00:36:54.458 [2024-07-10 14:39:03.807544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.458 [2024-07-10 14:39:03.807708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.458 [2024-07-10 14:39:03.807741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.458 [2024-07-10 14:39:03.807764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.458 [2024-07-10 14:39:03.807783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.458 [2024-07-10 14:39:03.807824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.458 qpair failed and we were unable to recover it. 00:36:54.458 [2024-07-10 14:39:03.817558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.458 [2024-07-10 14:39:03.817743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.458 [2024-07-10 14:39:03.817776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.458 [2024-07-10 14:39:03.817799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.458 [2024-07-10 14:39:03.817817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.458 [2024-07-10 14:39:03.817858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.458 qpair failed and we were unable to recover it. 00:36:54.458 [2024-07-10 14:39:03.827578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.458 [2024-07-10 14:39:03.827762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.458 [2024-07-10 14:39:03.827795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.458 [2024-07-10 14:39:03.827817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.458 [2024-07-10 14:39:03.827836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.458 [2024-07-10 14:39:03.827878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.458 qpair failed and we were unable to recover it. 
00:36:54.458 [2024-07-10 14:39:03.837576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.458 [2024-07-10 14:39:03.837745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.458 [2024-07-10 14:39:03.837779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.458 [2024-07-10 14:39:03.837801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.458 [2024-07-10 14:39:03.837820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.458 [2024-07-10 14:39:03.837860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.458 qpair failed and we were unable to recover it. 00:36:54.458 [2024-07-10 14:39:03.847639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.458 [2024-07-10 14:39:03.847808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.458 [2024-07-10 14:39:03.847841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.458 [2024-07-10 14:39:03.847877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.458 [2024-07-10 14:39:03.847896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.458 [2024-07-10 14:39:03.847938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.458 qpair failed and we were unable to recover it. 00:36:54.458 [2024-07-10 14:39:03.857658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.458 [2024-07-10 14:39:03.857835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.458 [2024-07-10 14:39:03.857869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.458 [2024-07-10 14:39:03.857892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.458 [2024-07-10 14:39:03.857910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.458 [2024-07-10 14:39:03.857952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.458 qpair failed and we were unable to recover it. 
00:36:54.458 [2024-07-10 14:39:03.867687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.458 [2024-07-10 14:39:03.867859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.458 [2024-07-10 14:39:03.867891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.458 [2024-07-10 14:39:03.867914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.459 [2024-07-10 14:39:03.867932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.459 [2024-07-10 14:39:03.867973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.459 qpair failed and we were unable to recover it. 00:36:54.459 [2024-07-10 14:39:03.877747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.459 [2024-07-10 14:39:03.877912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.459 [2024-07-10 14:39:03.877951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.459 [2024-07-10 14:39:03.877975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.459 [2024-07-10 14:39:03.877994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.459 [2024-07-10 14:39:03.878035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.459 qpair failed and we were unable to recover it. 00:36:54.459 [2024-07-10 14:39:03.887866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.459 [2024-07-10 14:39:03.888034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.459 [2024-07-10 14:39:03.888067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.459 [2024-07-10 14:39:03.888090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.459 [2024-07-10 14:39:03.888108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.459 [2024-07-10 14:39:03.888149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.459 qpair failed and we were unable to recover it. 
00:36:54.459 [2024-07-10 14:39:03.897809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.459 [2024-07-10 14:39:03.897972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.459 [2024-07-10 14:39:03.898005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.459 [2024-07-10 14:39:03.898028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.459 [2024-07-10 14:39:03.898046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.459 [2024-07-10 14:39:03.898087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.459 qpair failed and we were unable to recover it. 00:36:54.718 [2024-07-10 14:39:03.907837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.718 [2024-07-10 14:39:03.908019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.718 [2024-07-10 14:39:03.908055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.718 [2024-07-10 14:39:03.908079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.718 [2024-07-10 14:39:03.908097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.718 [2024-07-10 14:39:03.908140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.718 qpair failed and we were unable to recover it. 00:36:54.718 [2024-07-10 14:39:03.917859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.718 [2024-07-10 14:39:03.918040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.718 [2024-07-10 14:39:03.918074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.718 [2024-07-10 14:39:03.918098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.718 [2024-07-10 14:39:03.918117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.718 [2024-07-10 14:39:03.918164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.718 qpair failed and we were unable to recover it. 
00:36:54.718 [2024-07-10 14:39:03.927885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.718 [2024-07-10 14:39:03.928063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.718 [2024-07-10 14:39:03.928097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.718 [2024-07-10 14:39:03.928122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.718 [2024-07-10 14:39:03.928142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.718 [2024-07-10 14:39:03.928185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.718 qpair failed and we were unable to recover it. 00:36:54.718 [2024-07-10 14:39:03.937897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.718 [2024-07-10 14:39:03.938077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.718 [2024-07-10 14:39:03.938110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.718 [2024-07-10 14:39:03.938132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.718 [2024-07-10 14:39:03.938151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.718 [2024-07-10 14:39:03.938192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.718 qpair failed and we were unable to recover it. 00:36:54.718 [2024-07-10 14:39:03.947961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.718 [2024-07-10 14:39:03.948140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.718 [2024-07-10 14:39:03.948175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.718 [2024-07-10 14:39:03.948198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.718 [2024-07-10 14:39:03.948217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.718 [2024-07-10 14:39:03.948259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.718 qpair failed and we were unable to recover it. 
00:36:54.718 [2024-07-10 14:39:03.958087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.718 [2024-07-10 14:39:03.958263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.718 [2024-07-10 14:39:03.958301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.718 [2024-07-10 14:39:03.958325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.718 [2024-07-10 14:39:03.958344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.718 [2024-07-10 14:39:03.958387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.718 qpair failed and we were unable to recover it. 00:36:54.718 [2024-07-10 14:39:03.968022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.718 [2024-07-10 14:39:03.968193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.718 [2024-07-10 14:39:03.968238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.718 [2024-07-10 14:39:03.968262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.718 [2024-07-10 14:39:03.968281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.718 [2024-07-10 14:39:03.968322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.718 qpair failed and we were unable to recover it. 00:36:54.718 [2024-07-10 14:39:03.977974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.718 [2024-07-10 14:39:03.978137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.718 [2024-07-10 14:39:03.978171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.718 [2024-07-10 14:39:03.978193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.718 [2024-07-10 14:39:03.978212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.718 [2024-07-10 14:39:03.978252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.718 qpair failed and we were unable to recover it. 
00:36:54.718 [2024-07-10 14:39:03.988137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.718 [2024-07-10 14:39:03.988310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.718 [2024-07-10 14:39:03.988343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.718 [2024-07-10 14:39:03.988367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.718 [2024-07-10 14:39:03.988385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.718 [2024-07-10 14:39:03.988432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.718 qpair failed and we were unable to recover it. 00:36:54.719 [2024-07-10 14:39:03.998037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.719 [2024-07-10 14:39:03.998223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.719 [2024-07-10 14:39:03.998256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.719 [2024-07-10 14:39:03.998279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.719 [2024-07-10 14:39:03.998298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.719 [2024-07-10 14:39:03.998338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.719 qpair failed and we were unable to recover it. 00:36:54.719 [2024-07-10 14:39:04.008121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.719 [2024-07-10 14:39:04.008343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.719 [2024-07-10 14:39:04.008376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.719 [2024-07-10 14:39:04.008399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.719 [2024-07-10 14:39:04.008423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.719 [2024-07-10 14:39:04.008477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.719 qpair failed and we were unable to recover it. 
00:36:54.719 [2024-07-10 14:39:04.018181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.719 [2024-07-10 14:39:04.018382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.719 [2024-07-10 14:39:04.018417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.719 [2024-07-10 14:39:04.018458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.719 [2024-07-10 14:39:04.018479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.719 [2024-07-10 14:39:04.018521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.719 qpair failed and we were unable to recover it. 00:36:54.719 [2024-07-10 14:39:04.028165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.719 [2024-07-10 14:39:04.028352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.719 [2024-07-10 14:39:04.028386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.719 [2024-07-10 14:39:04.028408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.719 [2024-07-10 14:39:04.028438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.719 [2024-07-10 14:39:04.028482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.719 qpair failed and we were unable to recover it. 00:36:54.719 [2024-07-10 14:39:04.038210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.719 [2024-07-10 14:39:04.038413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.719 [2024-07-10 14:39:04.038455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.719 [2024-07-10 14:39:04.038479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.719 [2024-07-10 14:39:04.038499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.719 [2024-07-10 14:39:04.038540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.719 qpair failed and we were unable to recover it. 
00:36:54.719 [2024-07-10 14:39:04.048255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.719 [2024-07-10 14:39:04.048423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.719 [2024-07-10 14:39:04.048464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.719 [2024-07-10 14:39:04.048487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.719 [2024-07-10 14:39:04.048505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.719 [2024-07-10 14:39:04.048547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.719 qpair failed and we were unable to recover it. 00:36:54.719 [2024-07-10 14:39:04.058209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.719 [2024-07-10 14:39:04.058405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.719 [2024-07-10 14:39:04.058448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.719 [2024-07-10 14:39:04.058475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.719 [2024-07-10 14:39:04.058494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.719 [2024-07-10 14:39:04.058534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.719 qpair failed and we were unable to recover it. 00:36:54.719 [2024-07-10 14:39:04.068282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.719 [2024-07-10 14:39:04.068506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.719 [2024-07-10 14:39:04.068541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.719 [2024-07-10 14:39:04.068563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.719 [2024-07-10 14:39:04.068582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.719 [2024-07-10 14:39:04.068623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.719 qpair failed and we were unable to recover it. 
00:36:54.719 [2024-07-10 14:39:04.078260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.719 [2024-07-10 14:39:04.078459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.719 [2024-07-10 14:39:04.078492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.719 [2024-07-10 14:39:04.078515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.719 [2024-07-10 14:39:04.078534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.719 [2024-07-10 14:39:04.078576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.719 qpair failed and we were unable to recover it. 00:36:54.719 [2024-07-10 14:39:04.088349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.719 [2024-07-10 14:39:04.088531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.719 [2024-07-10 14:39:04.088565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.719 [2024-07-10 14:39:04.088587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.719 [2024-07-10 14:39:04.088606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.719 [2024-07-10 14:39:04.088647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.719 qpair failed and we were unable to recover it. 00:36:54.719 [2024-07-10 14:39:04.098355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.719 [2024-07-10 14:39:04.098532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.719 [2024-07-10 14:39:04.098565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.719 [2024-07-10 14:39:04.098588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.719 [2024-07-10 14:39:04.098614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.719 [2024-07-10 14:39:04.098656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.719 qpair failed and we were unable to recover it. 
00:36:54.719 [2024-07-10 14:39:04.108349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.719 [2024-07-10 14:39:04.108533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.719 [2024-07-10 14:39:04.108567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.719 [2024-07-10 14:39:04.108589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.719 [2024-07-10 14:39:04.108608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.719 [2024-07-10 14:39:04.108649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.719 qpair failed and we were unable to recover it. 00:36:54.719 [2024-07-10 14:39:04.118418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.719 [2024-07-10 14:39:04.118609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.719 [2024-07-10 14:39:04.118641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.719 [2024-07-10 14:39:04.118664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.719 [2024-07-10 14:39:04.118683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.719 [2024-07-10 14:39:04.118732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.719 qpair failed and we were unable to recover it. 00:36:54.719 [2024-07-10 14:39:04.128495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.719 [2024-07-10 14:39:04.128661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.719 [2024-07-10 14:39:04.128694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.719 [2024-07-10 14:39:04.128717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.719 [2024-07-10 14:39:04.128747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.720 [2024-07-10 14:39:04.128789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.720 qpair failed and we were unable to recover it. 
00:36:54.720 [2024-07-10 14:39:04.138449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.720 [2024-07-10 14:39:04.138620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.720 [2024-07-10 14:39:04.138654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.720 [2024-07-10 14:39:04.138677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.720 [2024-07-10 14:39:04.138695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.720 [2024-07-10 14:39:04.138737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.720 qpair failed and we were unable to recover it. 00:36:54.720 [2024-07-10 14:39:04.148522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.720 [2024-07-10 14:39:04.148699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.720 [2024-07-10 14:39:04.148732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.720 [2024-07-10 14:39:04.148755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.720 [2024-07-10 14:39:04.148774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.720 [2024-07-10 14:39:04.148816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.720 qpair failed and we were unable to recover it. 00:36:54.720 [2024-07-10 14:39:04.158529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.720 [2024-07-10 14:39:04.158692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.720 [2024-07-10 14:39:04.158734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.720 [2024-07-10 14:39:04.158756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.720 [2024-07-10 14:39:04.158775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.720 [2024-07-10 14:39:04.158821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.720 qpair failed and we were unable to recover it. 
00:36:54.720 [2024-07-10 14:39:04.168582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.720 [2024-07-10 14:39:04.168747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.720 [2024-07-10 14:39:04.168780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.720 [2024-07-10 14:39:04.168802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.720 [2024-07-10 14:39:04.168820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.720 [2024-07-10 14:39:04.168861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.720 qpair failed and we were unable to recover it. 00:36:54.720 [2024-07-10 14:39:04.178618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.720 [2024-07-10 14:39:04.178789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.720 [2024-07-10 14:39:04.178822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.720 [2024-07-10 14:39:04.178844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.720 [2024-07-10 14:39:04.178863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.720 [2024-07-10 14:39:04.178903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.720 qpair failed and we were unable to recover it. 00:36:54.720 [2024-07-10 14:39:04.188614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.720 [2024-07-10 14:39:04.188805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.720 [2024-07-10 14:39:04.188838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.720 [2024-07-10 14:39:04.188866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.720 [2024-07-10 14:39:04.188886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.720 [2024-07-10 14:39:04.188926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.720 qpair failed and we were unable to recover it. 
00:36:54.979 [2024-07-10 14:39:04.198693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.979 [2024-07-10 14:39:04.198861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.979 [2024-07-10 14:39:04.198896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.979 [2024-07-10 14:39:04.198919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.979 [2024-07-10 14:39:04.198938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.979 [2024-07-10 14:39:04.198981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.979 qpair failed and we were unable to recover it. 00:36:54.979 [2024-07-10 14:39:04.208711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.979 [2024-07-10 14:39:04.208891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.980 [2024-07-10 14:39:04.208926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.980 [2024-07-10 14:39:04.208950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.980 [2024-07-10 14:39:04.208968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.980 [2024-07-10 14:39:04.209011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.980 qpair failed and we were unable to recover it. 00:36:54.980 [2024-07-10 14:39:04.218789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.980 [2024-07-10 14:39:04.218976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.980 [2024-07-10 14:39:04.219009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.980 [2024-07-10 14:39:04.219032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.980 [2024-07-10 14:39:04.219050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.980 [2024-07-10 14:39:04.219091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.980 qpair failed and we were unable to recover it. 
00:36:54.980 [2024-07-10 14:39:04.228737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.980 [2024-07-10 14:39:04.228920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.980 [2024-07-10 14:39:04.228953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.980 [2024-07-10 14:39:04.228976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.980 [2024-07-10 14:39:04.228995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.980 [2024-07-10 14:39:04.229036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.980 qpair failed and we were unable to recover it. 00:36:54.980 [2024-07-10 14:39:04.238750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.980 [2024-07-10 14:39:04.238975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.980 [2024-07-10 14:39:04.239007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.980 [2024-07-10 14:39:04.239029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.980 [2024-07-10 14:39:04.239049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.980 [2024-07-10 14:39:04.239090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.980 qpair failed and we were unable to recover it. 00:36:54.980 [2024-07-10 14:39:04.248798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.980 [2024-07-10 14:39:04.248996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.980 [2024-07-10 14:39:04.249032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.980 [2024-07-10 14:39:04.249058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.980 [2024-07-10 14:39:04.249078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.980 [2024-07-10 14:39:04.249119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.980 qpair failed and we were unable to recover it. 
00:36:54.980 [2024-07-10 14:39:04.258853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.980 [2024-07-10 14:39:04.259021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.980 [2024-07-10 14:39:04.259054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.980 [2024-07-10 14:39:04.259077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.980 [2024-07-10 14:39:04.259096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.980 [2024-07-10 14:39:04.259137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.980 qpair failed and we were unable to recover it. 00:36:54.980 [2024-07-10 14:39:04.268910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.980 [2024-07-10 14:39:04.269077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.980 [2024-07-10 14:39:04.269111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.980 [2024-07-10 14:39:04.269133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.980 [2024-07-10 14:39:04.269151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.980 [2024-07-10 14:39:04.269192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.980 qpair failed and we were unable to recover it. 00:36:54.980 [2024-07-10 14:39:04.278879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.980 [2024-07-10 14:39:04.279052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.980 [2024-07-10 14:39:04.279090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.980 [2024-07-10 14:39:04.279114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.980 [2024-07-10 14:39:04.279133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.980 [2024-07-10 14:39:04.279173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.980 qpair failed and we were unable to recover it. 
00:36:54.980 [2024-07-10 14:39:04.288923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.980 [2024-07-10 14:39:04.289087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.980 [2024-07-10 14:39:04.289121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.980 [2024-07-10 14:39:04.289143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.980 [2024-07-10 14:39:04.289162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.980 [2024-07-10 14:39:04.289203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.980 qpair failed and we were unable to recover it. 00:36:54.980 [2024-07-10 14:39:04.298962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.980 [2024-07-10 14:39:04.299175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.980 [2024-07-10 14:39:04.299208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.980 [2024-07-10 14:39:04.299231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.980 [2024-07-10 14:39:04.299250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.980 [2024-07-10 14:39:04.299290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.980 qpair failed and we were unable to recover it. 00:36:54.980 [2024-07-10 14:39:04.308987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.980 [2024-07-10 14:39:04.309155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.980 [2024-07-10 14:39:04.309188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.980 [2024-07-10 14:39:04.309211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.980 [2024-07-10 14:39:04.309229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.980 [2024-07-10 14:39:04.309270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.980 qpair failed and we were unable to recover it. 
00:36:54.980 [2024-07-10 14:39:04.318951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.980 [2024-07-10 14:39:04.319114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.980 [2024-07-10 14:39:04.319147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.980 [2024-07-10 14:39:04.319169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.980 [2024-07-10 14:39:04.319188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.980 [2024-07-10 14:39:04.319234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.980 qpair failed and we were unable to recover it. 00:36:54.980 [2024-07-10 14:39:04.329069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.980 [2024-07-10 14:39:04.329239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.980 [2024-07-10 14:39:04.329272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.980 [2024-07-10 14:39:04.329294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.980 [2024-07-10 14:39:04.329314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.980 [2024-07-10 14:39:04.329354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.980 qpair failed and we were unable to recover it. 00:36:54.980 [2024-07-10 14:39:04.339034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.981 [2024-07-10 14:39:04.339203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.981 [2024-07-10 14:39:04.339236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.981 [2024-07-10 14:39:04.339258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.981 [2024-07-10 14:39:04.339277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.981 [2024-07-10 14:39:04.339318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.981 qpair failed and we were unable to recover it. 
00:36:54.981 [2024-07-10 14:39:04.349115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.981 [2024-07-10 14:39:04.349332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.981 [2024-07-10 14:39:04.349366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.981 [2024-07-10 14:39:04.349389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.981 [2024-07-10 14:39:04.349408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.981 [2024-07-10 14:39:04.349457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.981 qpair failed and we were unable to recover it. 00:36:54.981 [2024-07-10 14:39:04.359107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.981 [2024-07-10 14:39:04.359278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.981 [2024-07-10 14:39:04.359334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.981 [2024-07-10 14:39:04.359357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.981 [2024-07-10 14:39:04.359388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.981 [2024-07-10 14:39:04.359437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.981 qpair failed and we were unable to recover it. 00:36:54.981 [2024-07-10 14:39:04.369196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.981 [2024-07-10 14:39:04.369365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.981 [2024-07-10 14:39:04.369404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.981 [2024-07-10 14:39:04.369438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.981 [2024-07-10 14:39:04.369460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.981 [2024-07-10 14:39:04.369501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.981 qpair failed and we were unable to recover it. 
00:36:54.981 [2024-07-10 14:39:04.379131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.981 [2024-07-10 14:39:04.379291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.981 [2024-07-10 14:39:04.379324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.981 [2024-07-10 14:39:04.379346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.981 [2024-07-10 14:39:04.379364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.981 [2024-07-10 14:39:04.379405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.981 qpair failed and we were unable to recover it. 00:36:54.981 [2024-07-10 14:39:04.389227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.981 [2024-07-10 14:39:04.389438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.981 [2024-07-10 14:39:04.389472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.981 [2024-07-10 14:39:04.389495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.981 [2024-07-10 14:39:04.389514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.981 [2024-07-10 14:39:04.389554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.981 qpair failed and we were unable to recover it. 00:36:54.981 [2024-07-10 14:39:04.399176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.981 [2024-07-10 14:39:04.399348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.981 [2024-07-10 14:39:04.399380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.981 [2024-07-10 14:39:04.399403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.981 [2024-07-10 14:39:04.399422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.981 [2024-07-10 14:39:04.399474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.981 qpair failed and we were unable to recover it. 
00:36:54.981 [2024-07-10 14:39:04.409276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.981 [2024-07-10 14:39:04.409475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.981 [2024-07-10 14:39:04.409508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.981 [2024-07-10 14:39:04.409530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.981 [2024-07-10 14:39:04.409555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.981 [2024-07-10 14:39:04.409597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.981 qpair failed and we were unable to recover it. 00:36:54.981 [2024-07-10 14:39:04.419307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.981 [2024-07-10 14:39:04.419515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.981 [2024-07-10 14:39:04.419548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.981 [2024-07-10 14:39:04.419571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.981 [2024-07-10 14:39:04.419589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.981 [2024-07-10 14:39:04.419631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.981 qpair failed and we were unable to recover it. 00:36:54.981 [2024-07-10 14:39:04.429278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.981 [2024-07-10 14:39:04.429465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.981 [2024-07-10 14:39:04.429498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.981 [2024-07-10 14:39:04.429521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.981 [2024-07-10 14:39:04.429540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.981 [2024-07-10 14:39:04.429580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.981 qpair failed and we were unable to recover it. 
00:36:54.981 [2024-07-10 14:39:04.439351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.981 [2024-07-10 14:39:04.439537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.981 [2024-07-10 14:39:04.439571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.981 [2024-07-10 14:39:04.439593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.981 [2024-07-10 14:39:04.439612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.981 [2024-07-10 14:39:04.439653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.981 qpair failed and we were unable to recover it. 00:36:54.981 [2024-07-10 14:39:04.449354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.981 [2024-07-10 14:39:04.449518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.981 [2024-07-10 14:39:04.449551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.981 [2024-07-10 14:39:04.449574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.981 [2024-07-10 14:39:04.449593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:54.981 [2024-07-10 14:39:04.449634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:54.981 qpair failed and we were unable to recover it. 00:36:55.241 [2024-07-10 14:39:04.459566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.241 [2024-07-10 14:39:04.459744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.241 [2024-07-10 14:39:04.459785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.241 [2024-07-10 14:39:04.459810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.241 [2024-07-10 14:39:04.459829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.241 [2024-07-10 14:39:04.459872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.241 qpair failed and we were unable to recover it. 
00:36:55.241 [2024-07-10 14:39:04.469472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.241 [2024-07-10 14:39:04.469641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.241 [2024-07-10 14:39:04.469675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.241 [2024-07-10 14:39:04.469699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.241 [2024-07-10 14:39:04.469718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.241 [2024-07-10 14:39:04.469759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.241 qpair failed and we were unable to recover it. 00:36:55.241 [2024-07-10 14:39:04.479434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.241 [2024-07-10 14:39:04.479599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.241 [2024-07-10 14:39:04.479632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.241 [2024-07-10 14:39:04.479654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.241 [2024-07-10 14:39:04.479673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.241 [2024-07-10 14:39:04.479713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.241 qpair failed and we were unable to recover it. 00:36:55.241 [2024-07-10 14:39:04.489545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.241 [2024-07-10 14:39:04.489713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.241 [2024-07-10 14:39:04.489746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.241 [2024-07-10 14:39:04.489769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.241 [2024-07-10 14:39:04.489788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.241 [2024-07-10 14:39:04.489828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.241 qpair failed and we were unable to recover it. 
00:36:55.241 [2024-07-10 14:39:04.499548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.241 [2024-07-10 14:39:04.499716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.241 [2024-07-10 14:39:04.499748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.241 [2024-07-10 14:39:04.499771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.241 [2024-07-10 14:39:04.499795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.241 [2024-07-10 14:39:04.499837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.241 qpair failed and we were unable to recover it. 00:36:55.241 [2024-07-10 14:39:04.509593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.241 [2024-07-10 14:39:04.509776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.241 [2024-07-10 14:39:04.509811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.241 [2024-07-10 14:39:04.509834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.241 [2024-07-10 14:39:04.509852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.241 [2024-07-10 14:39:04.509894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.241 qpair failed and we were unable to recover it. 00:36:55.242 [2024-07-10 14:39:04.519626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.242 [2024-07-10 14:39:04.519794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.242 [2024-07-10 14:39:04.519828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.242 [2024-07-10 14:39:04.519851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.242 [2024-07-10 14:39:04.519870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.242 [2024-07-10 14:39:04.519910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.242 qpair failed and we were unable to recover it. 
00:36:55.242 [2024-07-10 14:39:04.529614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.242 [2024-07-10 14:39:04.529788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.242 [2024-07-10 14:39:04.529821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.242 [2024-07-10 14:39:04.529844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.242 [2024-07-10 14:39:04.529863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.242 [2024-07-10 14:39:04.529902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.242 qpair failed and we were unable to recover it. 00:36:55.242 [2024-07-10 14:39:04.539596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.242 [2024-07-10 14:39:04.539756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.242 [2024-07-10 14:39:04.539790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.242 [2024-07-10 14:39:04.539812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.242 [2024-07-10 14:39:04.539830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.242 [2024-07-10 14:39:04.539871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.242 qpair failed and we were unable to recover it. 00:36:55.242 [2024-07-10 14:39:04.549730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.242 [2024-07-10 14:39:04.549915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.242 [2024-07-10 14:39:04.549947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.242 [2024-07-10 14:39:04.549970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.242 [2024-07-10 14:39:04.549989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.242 [2024-07-10 14:39:04.550037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.242 qpair failed and we were unable to recover it. 
00:36:55.242 [2024-07-10 14:39:04.559811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.242 [2024-07-10 14:39:04.559987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.242 [2024-07-10 14:39:04.560020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.242 [2024-07-10 14:39:04.560042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.242 [2024-07-10 14:39:04.560061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.242 [2024-07-10 14:39:04.560101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.242 qpair failed and we were unable to recover it. 00:36:55.242 [2024-07-10 14:39:04.569732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.242 [2024-07-10 14:39:04.569904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.242 [2024-07-10 14:39:04.569936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.242 [2024-07-10 14:39:04.569958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.242 [2024-07-10 14:39:04.569977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.242 [2024-07-10 14:39:04.570018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.242 qpair failed and we were unable to recover it. 00:36:55.242 [2024-07-10 14:39:04.579733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.242 [2024-07-10 14:39:04.579908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.242 [2024-07-10 14:39:04.579941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.242 [2024-07-10 14:39:04.579964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.242 [2024-07-10 14:39:04.579982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.242 [2024-07-10 14:39:04.580023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.242 qpair failed and we were unable to recover it. 
00:36:55.242 [2024-07-10 14:39:04.589801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.242 [2024-07-10 14:39:04.590024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.242 [2024-07-10 14:39:04.590057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.242 [2024-07-10 14:39:04.590086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.242 [2024-07-10 14:39:04.590105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.242 [2024-07-10 14:39:04.590145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.242 qpair failed and we were unable to recover it. 00:36:55.242 [2024-07-10 14:39:04.599798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.242 [2024-07-10 14:39:04.599966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.242 [2024-07-10 14:39:04.599999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.242 [2024-07-10 14:39:04.600022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.242 [2024-07-10 14:39:04.600040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.242 [2024-07-10 14:39:04.600080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.242 qpair failed and we were unable to recover it. 00:36:55.242 [2024-07-10 14:39:04.609875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.242 [2024-07-10 14:39:04.610038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.242 [2024-07-10 14:39:04.610070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.242 [2024-07-10 14:39:04.610092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.242 [2024-07-10 14:39:04.610111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.243 [2024-07-10 14:39:04.610152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.243 qpair failed and we were unable to recover it. 
00:36:55.243 [2024-07-10 14:39:04.619814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.243 [2024-07-10 14:39:04.619973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.243 [2024-07-10 14:39:04.620016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.243 [2024-07-10 14:39:04.620040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.243 [2024-07-10 14:39:04.620058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.243 [2024-07-10 14:39:04.620099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.243 qpair failed and we were unable to recover it. 00:36:55.243 [2024-07-10 14:39:04.629904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.243 [2024-07-10 14:39:04.630079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.243 [2024-07-10 14:39:04.630111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.243 [2024-07-10 14:39:04.630133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.243 [2024-07-10 14:39:04.630152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.243 [2024-07-10 14:39:04.630192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.243 qpair failed and we were unable to recover it. 00:36:55.243 [2024-07-10 14:39:04.639895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.243 [2024-07-10 14:39:04.640091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.243 [2024-07-10 14:39:04.640124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.243 [2024-07-10 14:39:04.640146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.243 [2024-07-10 14:39:04.640163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.243 [2024-07-10 14:39:04.640202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.243 qpair failed and we were unable to recover it. 
00:36:55.243 [2024-07-10 14:39:04.649980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.243 [2024-07-10 14:39:04.650147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.243 [2024-07-10 14:39:04.650181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.243 [2024-07-10 14:39:04.650203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.243 [2024-07-10 14:39:04.650222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.243 [2024-07-10 14:39:04.650263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.243 qpair failed and we were unable to recover it. 00:36:55.243 [2024-07-10 14:39:04.660012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.243 [2024-07-10 14:39:04.660184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.243 [2024-07-10 14:39:04.660218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.243 [2024-07-10 14:39:04.660240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.243 [2024-07-10 14:39:04.660258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.243 [2024-07-10 14:39:04.660299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.243 qpair failed and we were unable to recover it. 00:36:55.243 [2024-07-10 14:39:04.669978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.243 [2024-07-10 14:39:04.670153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.243 [2024-07-10 14:39:04.670186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.243 [2024-07-10 14:39:04.670209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.243 [2024-07-10 14:39:04.670227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.243 [2024-07-10 14:39:04.670267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.243 qpair failed and we were unable to recover it. 
00:36:55.243 [2024-07-10 14:39:04.680054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.243 [2024-07-10 14:39:04.680222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.243 [2024-07-10 14:39:04.680260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.243 [2024-07-10 14:39:04.680284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.243 [2024-07-10 14:39:04.680303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.243 [2024-07-10 14:39:04.680344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.243 qpair failed and we were unable to recover it. 00:36:55.243 [2024-07-10 14:39:04.690072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.243 [2024-07-10 14:39:04.690267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.243 [2024-07-10 14:39:04.690301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.243 [2024-07-10 14:39:04.690323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.243 [2024-07-10 14:39:04.690342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.243 [2024-07-10 14:39:04.690383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.243 qpair failed and we were unable to recover it. 00:36:55.243 [2024-07-10 14:39:04.700066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.243 [2024-07-10 14:39:04.700230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.243 [2024-07-10 14:39:04.700263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.243 [2024-07-10 14:39:04.700286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.243 [2024-07-10 14:39:04.700305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.243 [2024-07-10 14:39:04.700345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.243 qpair failed and we were unable to recover it. 
00:36:55.243 [2024-07-10 14:39:04.710114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.243 [2024-07-10 14:39:04.710329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.243 [2024-07-10 14:39:04.710362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.243 [2024-07-10 14:39:04.710386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.243 [2024-07-10 14:39:04.710404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.243 [2024-07-10 14:39:04.710451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.243 qpair failed and we were unable to recover it. 00:36:55.243 [2024-07-10 14:39:04.720157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.243 [2024-07-10 14:39:04.720353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.243 [2024-07-10 14:39:04.720394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.243 [2024-07-10 14:39:04.720444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.243 [2024-07-10 14:39:04.720477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.244 [2024-07-10 14:39:04.720552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.244 qpair failed and we were unable to recover it. 00:36:55.503 [2024-07-10 14:39:04.730191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.503 [2024-07-10 14:39:04.730359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.503 [2024-07-10 14:39:04.730393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.503 [2024-07-10 14:39:04.730414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.503 [2024-07-10 14:39:04.730444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.503 [2024-07-10 14:39:04.730488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.503 qpair failed and we were unable to recover it. 
00:36:55.503 [2024-07-10 14:39:04.740284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.503 [2024-07-10 14:39:04.740505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.503 [2024-07-10 14:39:04.740553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.503 [2024-07-10 14:39:04.740577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.503 [2024-07-10 14:39:04.740596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.503 [2024-07-10 14:39:04.740637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.503 qpair failed and we were unable to recover it. 00:36:55.503 [2024-07-10 14:39:04.750283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.503 [2024-07-10 14:39:04.750501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.503 [2024-07-10 14:39:04.750540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.503 [2024-07-10 14:39:04.750562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.503 [2024-07-10 14:39:04.750581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.503 [2024-07-10 14:39:04.750622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.503 qpair failed and we were unable to recover it. 00:36:55.503 [2024-07-10 14:39:04.760280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.503 [2024-07-10 14:39:04.760487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.503 [2024-07-10 14:39:04.760521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.503 [2024-07-10 14:39:04.760544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.503 [2024-07-10 14:39:04.760563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.503 [2024-07-10 14:39:04.760603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.503 qpair failed and we were unable to recover it. 
00:36:55.503 [2024-07-10 14:39:04.770331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.503 [2024-07-10 14:39:04.770495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.503 [2024-07-10 14:39:04.770535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.503 [2024-07-10 14:39:04.770559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.503 [2024-07-10 14:39:04.770578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.503 [2024-07-10 14:39:04.770619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.503 qpair failed and we were unable to recover it. 00:36:55.503 [2024-07-10 14:39:04.780305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.503 [2024-07-10 14:39:04.780468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.503 [2024-07-10 14:39:04.780502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.503 [2024-07-10 14:39:04.780524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.503 [2024-07-10 14:39:04.780543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.503 [2024-07-10 14:39:04.780584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.503 qpair failed and we were unable to recover it. 00:36:55.503 [2024-07-10 14:39:04.790423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.503 [2024-07-10 14:39:04.790613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.503 [2024-07-10 14:39:04.790647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.503 [2024-07-10 14:39:04.790670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.790688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.790728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 
00:36:55.504 [2024-07-10 14:39:04.800367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.504 [2024-07-10 14:39:04.800551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.504 [2024-07-10 14:39:04.800585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.504 [2024-07-10 14:39:04.800608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.800627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.800667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 00:36:55.504 [2024-07-10 14:39:04.810518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.504 [2024-07-10 14:39:04.810683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.504 [2024-07-10 14:39:04.810717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.504 [2024-07-10 14:39:04.810740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.810759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.810805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 00:36:55.504 [2024-07-10 14:39:04.820414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.504 [2024-07-10 14:39:04.820600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.504 [2024-07-10 14:39:04.820634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.504 [2024-07-10 14:39:04.820657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.820676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.820717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 
00:36:55.504 [2024-07-10 14:39:04.830450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.504 [2024-07-10 14:39:04.830622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.504 [2024-07-10 14:39:04.830655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.504 [2024-07-10 14:39:04.830677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.830696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.830736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 00:36:55.504 [2024-07-10 14:39:04.840493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.504 [2024-07-10 14:39:04.840706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.504 [2024-07-10 14:39:04.840739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.504 [2024-07-10 14:39:04.840762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.840781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.840821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 00:36:55.504 [2024-07-10 14:39:04.850611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.504 [2024-07-10 14:39:04.850839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.504 [2024-07-10 14:39:04.850871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.504 [2024-07-10 14:39:04.850893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.850912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.850952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 
00:36:55.504 [2024-07-10 14:39:04.860588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.504 [2024-07-10 14:39:04.860802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.504 [2024-07-10 14:39:04.860835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.504 [2024-07-10 14:39:04.860857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.860875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.860916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 00:36:55.504 [2024-07-10 14:39:04.870613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.504 [2024-07-10 14:39:04.870830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.504 [2024-07-10 14:39:04.870864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.504 [2024-07-10 14:39:04.870886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.870904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.870959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 00:36:55.504 [2024-07-10 14:39:04.880604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.504 [2024-07-10 14:39:04.880813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.504 [2024-07-10 14:39:04.880850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.504 [2024-07-10 14:39:04.880876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.880896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.880937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 
00:36:55.504 [2024-07-10 14:39:04.890642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.504 [2024-07-10 14:39:04.890816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.504 [2024-07-10 14:39:04.890850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.504 [2024-07-10 14:39:04.890872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.890891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.890932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 00:36:55.504 [2024-07-10 14:39:04.900676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.504 [2024-07-10 14:39:04.900844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.504 [2024-07-10 14:39:04.900878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.504 [2024-07-10 14:39:04.900900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.900925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.900966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 00:36:55.504 [2024-07-10 14:39:04.910774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.504 [2024-07-10 14:39:04.910955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.504 [2024-07-10 14:39:04.910987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.504 [2024-07-10 14:39:04.911009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.911028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.911069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 
00:36:55.504 [2024-07-10 14:39:04.920765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.504 [2024-07-10 14:39:04.920930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.504 [2024-07-10 14:39:04.920963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.504 [2024-07-10 14:39:04.920986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.921004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.921044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 00:36:55.504 [2024-07-10 14:39:04.930824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.504 [2024-07-10 14:39:04.930997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.504 [2024-07-10 14:39:04.931030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.504 [2024-07-10 14:39:04.931058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.504 [2024-07-10 14:39:04.931077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.504 [2024-07-10 14:39:04.931117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.504 qpair failed and we were unable to recover it. 00:36:55.505 [2024-07-10 14:39:04.940748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.505 [2024-07-10 14:39:04.940910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.505 [2024-07-10 14:39:04.940943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.505 [2024-07-10 14:39:04.940967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.505 [2024-07-10 14:39:04.940985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.505 [2024-07-10 14:39:04.941031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.505 qpair failed and we were unable to recover it. 
00:36:55.505 [2024-07-10 14:39:04.950854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.505 [2024-07-10 14:39:04.951095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.505 [2024-07-10 14:39:04.951129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.505 [2024-07-10 14:39:04.951157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.505 [2024-07-10 14:39:04.951177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.505 [2024-07-10 14:39:04.951217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.505 qpair failed and we were unable to recover it. 00:36:55.505 [2024-07-10 14:39:04.960802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.505 [2024-07-10 14:39:04.960983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.505 [2024-07-10 14:39:04.961016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.505 [2024-07-10 14:39:04.961039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.505 [2024-07-10 14:39:04.961058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.505 [2024-07-10 14:39:04.961098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.505 qpair failed and we were unable to recover it. 00:36:55.505 [2024-07-10 14:39:04.970906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.505 [2024-07-10 14:39:04.971090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.505 [2024-07-10 14:39:04.971123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.505 [2024-07-10 14:39:04.971145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.505 [2024-07-10 14:39:04.971164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.505 [2024-07-10 14:39:04.971205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.505 qpair failed and we were unable to recover it. 
00:36:55.505 [2024-07-10 14:39:04.980996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.505 [2024-07-10 14:39:04.981171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.505 [2024-07-10 14:39:04.981216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.505 [2024-07-10 14:39:04.981243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.505 [2024-07-10 14:39:04.981263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.505 [2024-07-10 14:39:04.981306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.505 qpair failed and we were unable to recover it. 00:36:55.764 [2024-07-10 14:39:04.990953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.764 [2024-07-10 14:39:04.991123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.764 [2024-07-10 14:39:04.991158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.764 [2024-07-10 14:39:04.991188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.764 [2024-07-10 14:39:04.991208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.764 [2024-07-10 14:39:04.991250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.764 qpair failed and we were unable to recover it. 00:36:55.764 [2024-07-10 14:39:05.001016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.764 [2024-07-10 14:39:05.001204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.764 [2024-07-10 14:39:05.001237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.764 [2024-07-10 14:39:05.001261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.764 [2024-07-10 14:39:05.001280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.764 [2024-07-10 14:39:05.001320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.764 qpair failed and we were unable to recover it. 
00:36:55.764 [2024-07-10 14:39:05.011085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.764 [2024-07-10 14:39:05.011296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.764 [2024-07-10 14:39:05.011333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.764 [2024-07-10 14:39:05.011358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.764 [2024-07-10 14:39:05.011377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.764 [2024-07-10 14:39:05.011434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.764 qpair failed and we were unable to recover it. 00:36:55.764 [2024-07-10 14:39:05.021032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.764 [2024-07-10 14:39:05.021189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.764 [2024-07-10 14:39:05.021223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.764 [2024-07-10 14:39:05.021246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.764 [2024-07-10 14:39:05.021265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.764 [2024-07-10 14:39:05.021306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.764 qpair failed and we were unable to recover it. 00:36:55.764 [2024-07-10 14:39:05.031129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.764 [2024-07-10 14:39:05.031303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.764 [2024-07-10 14:39:05.031341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.764 [2024-07-10 14:39:05.031364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.764 [2024-07-10 14:39:05.031382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.764 [2024-07-10 14:39:05.031438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.764 qpair failed and we were unable to recover it. 
00:36:55.764 [2024-07-10 14:39:05.041035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.764 [2024-07-10 14:39:05.041205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.764 [2024-07-10 14:39:05.041238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.764 [2024-07-10 14:39:05.041261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.764 [2024-07-10 14:39:05.041280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.764 [2024-07-10 14:39:05.041320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.764 qpair failed and we were unable to recover it. 00:36:55.764 [2024-07-10 14:39:05.051108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.764 [2024-07-10 14:39:05.051272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.764 [2024-07-10 14:39:05.051305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.764 [2024-07-10 14:39:05.051328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.764 [2024-07-10 14:39:05.051347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.764 [2024-07-10 14:39:05.051387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.764 qpair failed and we were unable to recover it. 00:36:55.764 [2024-07-10 14:39:05.061150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.764 [2024-07-10 14:39:05.061361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.764 [2024-07-10 14:39:05.061394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.764 [2024-07-10 14:39:05.061417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.764 [2024-07-10 14:39:05.061443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.764 [2024-07-10 14:39:05.061486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.764 qpair failed and we were unable to recover it. 
00:36:55.764 [2024-07-10 14:39:05.071226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.764 [2024-07-10 14:39:05.071404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.764 [2024-07-10 14:39:05.071449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.764 [2024-07-10 14:39:05.071473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.764 [2024-07-10 14:39:05.071491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.764 [2024-07-10 14:39:05.071532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.764 qpair failed and we were unable to recover it. 00:36:55.764 [2024-07-10 14:39:05.081298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.081482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.765 [2024-07-10 14:39:05.081517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.765 [2024-07-10 14:39:05.081552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.765 [2024-07-10 14:39:05.081573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.765 [2024-07-10 14:39:05.081615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.765 qpair failed and we were unable to recover it. 00:36:55.765 [2024-07-10 14:39:05.091227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.091390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.765 [2024-07-10 14:39:05.091441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.765 [2024-07-10 14:39:05.091468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.765 [2024-07-10 14:39:05.091487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.765 [2024-07-10 14:39:05.091527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.765 qpair failed and we were unable to recover it. 
00:36:55.765 [2024-07-10 14:39:05.101237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.101420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.765 [2024-07-10 14:39:05.101460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.765 [2024-07-10 14:39:05.101483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.765 [2024-07-10 14:39:05.101501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.765 [2024-07-10 14:39:05.101542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.765 qpair failed and we were unable to recover it. 00:36:55.765 [2024-07-10 14:39:05.111321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.111533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.765 [2024-07-10 14:39:05.111566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.765 [2024-07-10 14:39:05.111589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.765 [2024-07-10 14:39:05.111608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.765 [2024-07-10 14:39:05.111649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.765 qpair failed and we were unable to recover it. 00:36:55.765 [2024-07-10 14:39:05.121297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.121515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.765 [2024-07-10 14:39:05.121551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.765 [2024-07-10 14:39:05.121574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.765 [2024-07-10 14:39:05.121593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.765 [2024-07-10 14:39:05.121635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.765 qpair failed and we were unable to recover it. 
00:36:55.765 [2024-07-10 14:39:05.131370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.131547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.765 [2024-07-10 14:39:05.131581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.765 [2024-07-10 14:39:05.131617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.765 [2024-07-10 14:39:05.131635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.765 [2024-07-10 14:39:05.131677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.765 qpair failed and we were unable to recover it. 00:36:55.765 [2024-07-10 14:39:05.141410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.141591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.765 [2024-07-10 14:39:05.141630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.765 [2024-07-10 14:39:05.141652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.765 [2024-07-10 14:39:05.141671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.765 [2024-07-10 14:39:05.141712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.765 qpair failed and we were unable to recover it. 00:36:55.765 [2024-07-10 14:39:05.151350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.151529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.765 [2024-07-10 14:39:05.151562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.765 [2024-07-10 14:39:05.151584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.765 [2024-07-10 14:39:05.151602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.765 [2024-07-10 14:39:05.151643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.765 qpair failed and we were unable to recover it. 
00:36:55.765 [2024-07-10 14:39:05.161448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.161615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.765 [2024-07-10 14:39:05.161648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.765 [2024-07-10 14:39:05.161677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.765 [2024-07-10 14:39:05.161695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.765 [2024-07-10 14:39:05.161736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.765 qpair failed and we were unable to recover it. 00:36:55.765 [2024-07-10 14:39:05.171632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.171826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.765 [2024-07-10 14:39:05.171864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.765 [2024-07-10 14:39:05.171888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.765 [2024-07-10 14:39:05.171906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.765 [2024-07-10 14:39:05.171947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.765 qpair failed and we were unable to recover it. 00:36:55.765 [2024-07-10 14:39:05.181483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.181661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.765 [2024-07-10 14:39:05.181694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.765 [2024-07-10 14:39:05.181717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.765 [2024-07-10 14:39:05.181736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.765 [2024-07-10 14:39:05.181776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.765 qpair failed and we were unable to recover it. 
00:36:55.765 [2024-07-10 14:39:05.191501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.191676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.765 [2024-07-10 14:39:05.191709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.765 [2024-07-10 14:39:05.191732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.765 [2024-07-10 14:39:05.191750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.765 [2024-07-10 14:39:05.191791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.765 qpair failed and we were unable to recover it. 00:36:55.765 [2024-07-10 14:39:05.201599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.201774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.765 [2024-07-10 14:39:05.201807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.765 [2024-07-10 14:39:05.201830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.765 [2024-07-10 14:39:05.201848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.765 [2024-07-10 14:39:05.201899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.765 qpair failed and we were unable to recover it. 00:36:55.765 [2024-07-10 14:39:05.211589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.211767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.765 [2024-07-10 14:39:05.211800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.765 [2024-07-10 14:39:05.211823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.765 [2024-07-10 14:39:05.211841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.765 [2024-07-10 14:39:05.211887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.765 qpair failed and we were unable to recover it. 
00:36:55.765 [2024-07-10 14:39:05.221612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.765 [2024-07-10 14:39:05.221776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.766 [2024-07-10 14:39:05.221814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.766 [2024-07-10 14:39:05.221835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.766 [2024-07-10 14:39:05.221854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.766 [2024-07-10 14:39:05.221895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.766 qpair failed and we were unable to recover it. 00:36:55.766 [2024-07-10 14:39:05.231615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.766 [2024-07-10 14:39:05.231798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.766 [2024-07-10 14:39:05.231833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.766 [2024-07-10 14:39:05.231856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.766 [2024-07-10 14:39:05.231874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.766 [2024-07-10 14:39:05.231915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.766 qpair failed and we were unable to recover it. 00:36:55.766 [2024-07-10 14:39:05.241663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.766 [2024-07-10 14:39:05.241912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.766 [2024-07-10 14:39:05.241957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.766 [2024-07-10 14:39:05.241980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.766 [2024-07-10 14:39:05.241999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:55.766 [2024-07-10 14:39:05.242044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:55.766 qpair failed and we were unable to recover it. 
00:36:56.024 [2024-07-10 14:39:05.251713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.024 [2024-07-10 14:39:05.251939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.024 [2024-07-10 14:39:05.251984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.024 [2024-07-10 14:39:05.252007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.024 [2024-07-10 14:39:05.252026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.025 [2024-07-10 14:39:05.252067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.025 qpair failed and we were unable to recover it. 00:36:56.025 [2024-07-10 14:39:05.261747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.025 [2024-07-10 14:39:05.261929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.025 [2024-07-10 14:39:05.261968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.025 [2024-07-10 14:39:05.261991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.025 [2024-07-10 14:39:05.262009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.025 [2024-07-10 14:39:05.262050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.025 qpair failed and we were unable to recover it. 00:36:56.025 [2024-07-10 14:39:05.271789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.025 [2024-07-10 14:39:05.271996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.025 [2024-07-10 14:39:05.272030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.025 [2024-07-10 14:39:05.272053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.025 [2024-07-10 14:39:05.272072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.025 [2024-07-10 14:39:05.272113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.025 qpair failed and we were unable to recover it. 
00:36:56.025 [2024-07-10 14:39:05.281748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.025 [2024-07-10 14:39:05.281940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.025 [2024-07-10 14:39:05.281974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.025 [2024-07-10 14:39:05.281997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.025 [2024-07-10 14:39:05.282016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.025 [2024-07-10 14:39:05.282058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.025 qpair failed and we were unable to recover it. 00:36:56.025 [2024-07-10 14:39:05.291888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.025 [2024-07-10 14:39:05.292049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.025 [2024-07-10 14:39:05.292083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.025 [2024-07-10 14:39:05.292106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.025 [2024-07-10 14:39:05.292124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.025 [2024-07-10 14:39:05.292165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.025 qpair failed and we were unable to recover it. 00:36:56.025 [2024-07-10 14:39:05.301824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.025 [2024-07-10 14:39:05.301989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.025 [2024-07-10 14:39:05.302023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.025 [2024-07-10 14:39:05.302045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.025 [2024-07-10 14:39:05.302069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.025 [2024-07-10 14:39:05.302111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.025 qpair failed and we were unable to recover it. 
00:36:56.025 [2024-07-10 14:39:05.311859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.025 [2024-07-10 14:39:05.312066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.025 [2024-07-10 14:39:05.312099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.025 [2024-07-10 14:39:05.312122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.025 [2024-07-10 14:39:05.312140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.025 [2024-07-10 14:39:05.312181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.025 qpair failed and we were unable to recover it. 00:36:56.025 [2024-07-10 14:39:05.321866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.025 [2024-07-10 14:39:05.322034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.025 [2024-07-10 14:39:05.322067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.025 [2024-07-10 14:39:05.322090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.025 [2024-07-10 14:39:05.322108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.025 [2024-07-10 14:39:05.322148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.025 qpair failed and we were unable to recover it. 00:36:56.025 [2024-07-10 14:39:05.332039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.025 [2024-07-10 14:39:05.332206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.025 [2024-07-10 14:39:05.332239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.025 [2024-07-10 14:39:05.332262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.025 [2024-07-10 14:39:05.332280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.025 [2024-07-10 14:39:05.332326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.025 qpair failed and we were unable to recover it. 
00:36:56.025 [2024-07-10 14:39:05.341974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.025 [2024-07-10 14:39:05.342146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.025 [2024-07-10 14:39:05.342180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.025 [2024-07-10 14:39:05.342202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.025 [2024-07-10 14:39:05.342220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.025 [2024-07-10 14:39:05.342261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.025 qpair failed and we were unable to recover it. 00:36:56.025 [2024-07-10 14:39:05.351962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.025 [2024-07-10 14:39:05.352135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.025 [2024-07-10 14:39:05.352168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.025 [2024-07-10 14:39:05.352190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.025 [2024-07-10 14:39:05.352209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.025 [2024-07-10 14:39:05.352250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.026 qpair failed and we were unable to recover it. 00:36:56.026 [2024-07-10 14:39:05.361972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.026 [2024-07-10 14:39:05.362137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.026 [2024-07-10 14:39:05.362171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.026 [2024-07-10 14:39:05.362193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.026 [2024-07-10 14:39:05.362212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.026 [2024-07-10 14:39:05.362253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.026 qpair failed and we were unable to recover it. 
00:36:56.026 [2024-07-10 14:39:05.372027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.026 [2024-07-10 14:39:05.372189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.026 [2024-07-10 14:39:05.372222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.026 [2024-07-10 14:39:05.372245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.026 [2024-07-10 14:39:05.372263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.026 [2024-07-10 14:39:05.372307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.026 qpair failed and we were unable to recover it. 00:36:56.026 [2024-07-10 14:39:05.382023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.026 [2024-07-10 14:39:05.382187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.026 [2024-07-10 14:39:05.382220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.026 [2024-07-10 14:39:05.382242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.026 [2024-07-10 14:39:05.382261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.026 [2024-07-10 14:39:05.382301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.026 qpair failed and we were unable to recover it. 00:36:56.026 [2024-07-10 14:39:05.392120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.026 [2024-07-10 14:39:05.392299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.026 [2024-07-10 14:39:05.392335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.026 [2024-07-10 14:39:05.392364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.026 [2024-07-10 14:39:05.392383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.026 [2024-07-10 14:39:05.392430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.026 qpair failed and we were unable to recover it. 
00:36:56.026 [2024-07-10 14:39:05.402111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.026 [2024-07-10 14:39:05.402278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.026 [2024-07-10 14:39:05.402311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.026 [2024-07-10 14:39:05.402334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.026 [2024-07-10 14:39:05.402357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.026 [2024-07-10 14:39:05.402399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.026 qpair failed and we were unable to recover it. 00:36:56.026 [2024-07-10 14:39:05.412140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.026 [2024-07-10 14:39:05.412333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.026 [2024-07-10 14:39:05.412367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.026 [2024-07-10 14:39:05.412389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.026 [2024-07-10 14:39:05.412408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.026 [2024-07-10 14:39:05.412456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.026 qpair failed and we were unable to recover it. 00:36:56.026 [2024-07-10 14:39:05.422140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.026 [2024-07-10 14:39:05.422303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.026 [2024-07-10 14:39:05.422337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.026 [2024-07-10 14:39:05.422360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.026 [2024-07-10 14:39:05.422378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.026 [2024-07-10 14:39:05.422418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.026 qpair failed and we were unable to recover it. 
00:36:56.026 [2024-07-10 14:39:05.432168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.026 [2024-07-10 14:39:05.432353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.026 [2024-07-10 14:39:05.432386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.026 [2024-07-10 14:39:05.432408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.026 [2024-07-10 14:39:05.432438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.026 [2024-07-10 14:39:05.432482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.026 qpair failed and we were unable to recover it. 00:36:56.026 [2024-07-10 14:39:05.442209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.026 [2024-07-10 14:39:05.442392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.026 [2024-07-10 14:39:05.442433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.026 [2024-07-10 14:39:05.442457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.026 [2024-07-10 14:39:05.442476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.026 [2024-07-10 14:39:05.442517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.026 qpair failed and we were unable to recover it. 00:36:56.026 [2024-07-10 14:39:05.452276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.026 [2024-07-10 14:39:05.452444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.026 [2024-07-10 14:39:05.452478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.026 [2024-07-10 14:39:05.452501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.026 [2024-07-10 14:39:05.452519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.026 [2024-07-10 14:39:05.452560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.027 qpair failed and we were unable to recover it. 
00:36:56.027 [2024-07-10 14:39:05.462279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.027 [2024-07-10 14:39:05.462460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.027 [2024-07-10 14:39:05.462494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.027 [2024-07-10 14:39:05.462517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.027 [2024-07-10 14:39:05.462536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.027 [2024-07-10 14:39:05.462576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.027 qpair failed and we were unable to recover it. 00:36:56.027 [2024-07-10 14:39:05.472358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.027 [2024-07-10 14:39:05.472536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.027 [2024-07-10 14:39:05.472569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.027 [2024-07-10 14:39:05.472592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.027 [2024-07-10 14:39:05.472611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.027 [2024-07-10 14:39:05.472650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.027 qpair failed and we were unable to recover it. 00:36:56.027 [2024-07-10 14:39:05.482355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.027 [2024-07-10 14:39:05.482580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.027 [2024-07-10 14:39:05.482615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.027 [2024-07-10 14:39:05.482643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.027 [2024-07-10 14:39:05.482662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.027 [2024-07-10 14:39:05.482703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.027 qpair failed and we were unable to recover it. 
00:36:56.027 [2024-07-10 14:39:05.492343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.027 [2024-07-10 14:39:05.492548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.027 [2024-07-10 14:39:05.492581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.027 [2024-07-10 14:39:05.492603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.027 [2024-07-10 14:39:05.492622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.027 [2024-07-10 14:39:05.492663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.027 qpair failed and we were unable to recover it. 00:36:56.027 [2024-07-10 14:39:05.502474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.027 [2024-07-10 14:39:05.502642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.027 [2024-07-10 14:39:05.502677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.027 [2024-07-10 14:39:05.502701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.027 [2024-07-10 14:39:05.502720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.027 [2024-07-10 14:39:05.502769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.027 qpair failed and we were unable to recover it. 00:36:56.286 [2024-07-10 14:39:05.512524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.286 [2024-07-10 14:39:05.512710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.286 [2024-07-10 14:39:05.512745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.286 [2024-07-10 14:39:05.512768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.286 [2024-07-10 14:39:05.512787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.286 [2024-07-10 14:39:05.512828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.286 qpair failed and we were unable to recover it. 
00:36:56.286 [2024-07-10 14:39:05.522415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.286 [2024-07-10 14:39:05.522623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.286 [2024-07-10 14:39:05.522657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.286 [2024-07-10 14:39:05.522680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.286 [2024-07-10 14:39:05.522699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.286 [2024-07-10 14:39:05.522739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.286 qpair failed and we were unable to recover it. 00:36:56.286 [2024-07-10 14:39:05.532513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.286 [2024-07-10 14:39:05.532694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.286 [2024-07-10 14:39:05.532733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.286 [2024-07-10 14:39:05.532756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.286 [2024-07-10 14:39:05.532774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.286 [2024-07-10 14:39:05.532814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.286 qpair failed and we were unable to recover it. 00:36:56.286 [2024-07-10 14:39:05.542519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.286 [2024-07-10 14:39:05.542690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.286 [2024-07-10 14:39:05.542723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.286 [2024-07-10 14:39:05.542745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.286 [2024-07-10 14:39:05.542764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.286 [2024-07-10 14:39:05.542804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.286 qpair failed and we were unable to recover it. 
00:36:56.286 [2024-07-10 14:39:05.552525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.286 [2024-07-10 14:39:05.552693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.286 [2024-07-10 14:39:05.552726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.286 [2024-07-10 14:39:05.552749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.286 [2024-07-10 14:39:05.552778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.286 [2024-07-10 14:39:05.552818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.286 qpair failed and we were unable to recover it. 00:36:56.286 [2024-07-10 14:39:05.562554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.286 [2024-07-10 14:39:05.562730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.286 [2024-07-10 14:39:05.562763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.286 [2024-07-10 14:39:05.562786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.286 [2024-07-10 14:39:05.562805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.286 [2024-07-10 14:39:05.562845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.286 qpair failed and we were unable to recover it. 00:36:56.286 [2024-07-10 14:39:05.572611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.286 [2024-07-10 14:39:05.572774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.286 [2024-07-10 14:39:05.572812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.286 [2024-07-10 14:39:05.572835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.286 [2024-07-10 14:39:05.572854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.286 [2024-07-10 14:39:05.572894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.286 qpair failed and we were unable to recover it. 
00:36:56.286 [2024-07-10 14:39:05.582581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.286 [2024-07-10 14:39:05.582756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.286 [2024-07-10 14:39:05.582789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.286 [2024-07-10 14:39:05.582810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.286 [2024-07-10 14:39:05.582829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.286 [2024-07-10 14:39:05.582869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.286 qpair failed and we were unable to recover it. 00:36:56.286 [2024-07-10 14:39:05.592642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.286 [2024-07-10 14:39:05.592860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.286 [2024-07-10 14:39:05.592892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.286 [2024-07-10 14:39:05.592914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.286 [2024-07-10 14:39:05.592933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.286 [2024-07-10 14:39:05.592973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.286 qpair failed and we were unable to recover it. 00:36:56.286 [2024-07-10 14:39:05.602631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.286 [2024-07-10 14:39:05.602796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.286 [2024-07-10 14:39:05.602829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.286 [2024-07-10 14:39:05.602851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.286 [2024-07-10 14:39:05.602870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.286 [2024-07-10 14:39:05.602911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.286 qpair failed and we were unable to recover it. 
00:36:56.286 [2024-07-10 14:39:05.612740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.286 [2024-07-10 14:39:05.612927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.286 [2024-07-10 14:39:05.612960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.286 [2024-07-10 14:39:05.612982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.286 [2024-07-10 14:39:05.613001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.286 [2024-07-10 14:39:05.613047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.286 qpair failed and we were unable to recover it. 00:36:56.287 [2024-07-10 14:39:05.622707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.287 [2024-07-10 14:39:05.622864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.287 [2024-07-10 14:39:05.622896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.287 [2024-07-10 14:39:05.622918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.287 [2024-07-10 14:39:05.622937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.287 [2024-07-10 14:39:05.622977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.287 qpair failed and we were unable to recover it. 00:36:56.287 [2024-07-10 14:39:05.632752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.287 [2024-07-10 14:39:05.632924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.287 [2024-07-10 14:39:05.632956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.287 [2024-07-10 14:39:05.632978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.287 [2024-07-10 14:39:05.632997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.287 [2024-07-10 14:39:05.633039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.287 qpair failed and we were unable to recover it. 
00:36:56.287 [2024-07-10 14:39:05.642862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.287 [2024-07-10 14:39:05.643032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.287 [2024-07-10 14:39:05.643064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.287 [2024-07-10 14:39:05.643086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.287 [2024-07-10 14:39:05.643117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.287 [2024-07-10 14:39:05.643158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.287 qpair failed and we were unable to recover it. 00:36:56.287 [2024-07-10 14:39:05.652815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.287 [2024-07-10 14:39:05.652974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.287 [2024-07-10 14:39:05.653007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.287 [2024-07-10 14:39:05.653029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.287 [2024-07-10 14:39:05.653049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.287 [2024-07-10 14:39:05.653089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.287 qpair failed and we were unable to recover it. 00:36:56.287 [2024-07-10 14:39:05.662848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.287 [2024-07-10 14:39:05.663011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.287 [2024-07-10 14:39:05.663052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.287 [2024-07-10 14:39:05.663076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.287 [2024-07-10 14:39:05.663094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.287 [2024-07-10 14:39:05.663135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.287 qpair failed and we were unable to recover it. 
00:36:56.287 [2024-07-10 14:39:05.672917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.287 [2024-07-10 14:39:05.673093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.287 [2024-07-10 14:39:05.673126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.287 [2024-07-10 14:39:05.673148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.287 [2024-07-10 14:39:05.673167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.287 [2024-07-10 14:39:05.673207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.287 qpair failed and we were unable to recover it. 00:36:56.287 [2024-07-10 14:39:05.682879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.287 [2024-07-10 14:39:05.683060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.287 [2024-07-10 14:39:05.683092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.287 [2024-07-10 14:39:05.683115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.287 [2024-07-10 14:39:05.683133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.287 [2024-07-10 14:39:05.683173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.287 qpair failed and we were unable to recover it. 00:36:56.287 [2024-07-10 14:39:05.692937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.287 [2024-07-10 14:39:05.693116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.287 [2024-07-10 14:39:05.693149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.287 [2024-07-10 14:39:05.693171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.287 [2024-07-10 14:39:05.693189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.287 [2024-07-10 14:39:05.693231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.287 qpair failed and we were unable to recover it. 
00:36:56.287 [2024-07-10 14:39:05.702980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.287 [2024-07-10 14:39:05.703164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.287 [2024-07-10 14:39:05.703197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.287 [2024-07-10 14:39:05.703219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.287 [2024-07-10 14:39:05.703243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.287 [2024-07-10 14:39:05.703285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.287 qpair failed and we were unable to recover it. 00:36:56.287 [2024-07-10 14:39:05.713016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.287 [2024-07-10 14:39:05.713230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.287 [2024-07-10 14:39:05.713263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.287 [2024-07-10 14:39:05.713286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.287 [2024-07-10 14:39:05.713304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.287 [2024-07-10 14:39:05.713344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.287 qpair failed and we were unable to recover it. 00:36:56.287 [2024-07-10 14:39:05.723022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.287 [2024-07-10 14:39:05.723195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.287 [2024-07-10 14:39:05.723227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.287 [2024-07-10 14:39:05.723248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.287 [2024-07-10 14:39:05.723267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.287 [2024-07-10 14:39:05.723313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.287 qpair failed and we were unable to recover it. 
00:36:56.287 [2024-07-10 14:39:05.733060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.287 [2024-07-10 14:39:05.733226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.287 [2024-07-10 14:39:05.733258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.287 [2024-07-10 14:39:05.733280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.287 [2024-07-10 14:39:05.733297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.287 [2024-07-10 14:39:05.733336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.287 qpair failed and we were unable to recover it. 00:36:56.287 [2024-07-10 14:39:05.743059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.287 [2024-07-10 14:39:05.743228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.287 [2024-07-10 14:39:05.743262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.287 [2024-07-10 14:39:05.743285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.287 [2024-07-10 14:39:05.743304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.287 [2024-07-10 14:39:05.743344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.287 qpair failed and we were unable to recover it. 00:36:56.287 [2024-07-10 14:39:05.753170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.287 [2024-07-10 14:39:05.753344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.287 [2024-07-10 14:39:05.753377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.287 [2024-07-10 14:39:05.753399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.287 [2024-07-10 14:39:05.753417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.287 [2024-07-10 14:39:05.753468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.287 qpair failed and we were unable to recover it. 
00:36:56.287 [2024-07-10 14:39:05.763109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.288 [2024-07-10 14:39:05.763285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.288 [2024-07-10 14:39:05.763320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.288 [2024-07-10 14:39:05.763343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.288 [2024-07-10 14:39:05.763361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.288 [2024-07-10 14:39:05.763403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.288 qpair failed and we were unable to recover it. 00:36:56.546 [2024-07-10 14:39:05.773220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.546 [2024-07-10 14:39:05.773394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.546 [2024-07-10 14:39:05.773439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.546 [2024-07-10 14:39:05.773465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.546 [2024-07-10 14:39:05.773484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.546 [2024-07-10 14:39:05.773525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.546 qpair failed and we were unable to recover it. 00:36:56.546 [2024-07-10 14:39:05.783199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.546 [2024-07-10 14:39:05.783368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.546 [2024-07-10 14:39:05.783402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.546 [2024-07-10 14:39:05.783432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.546 [2024-07-10 14:39:05.783453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.546 [2024-07-10 14:39:05.783495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.546 qpair failed and we were unable to recover it. 
00:36:56.546 [2024-07-10 14:39:05.793281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.546 [2024-07-10 14:39:05.793474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.546 [2024-07-10 14:39:05.793507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.546 [2024-07-10 14:39:05.793529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.546 [2024-07-10 14:39:05.793554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.546 [2024-07-10 14:39:05.793597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.546 qpair failed and we were unable to recover it. 00:36:56.546 [2024-07-10 14:39:05.803233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.546 [2024-07-10 14:39:05.803402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.546 [2024-07-10 14:39:05.803442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.546 [2024-07-10 14:39:05.803466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.547 [2024-07-10 14:39:05.803485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.547 [2024-07-10 14:39:05.803526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.547 qpair failed and we were unable to recover it. 00:36:56.547 [2024-07-10 14:39:05.813272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.547 [2024-07-10 14:39:05.813454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.547 [2024-07-10 14:39:05.813487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.547 [2024-07-10 14:39:05.813510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.547 [2024-07-10 14:39:05.813528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.547 [2024-07-10 14:39:05.813569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.547 qpair failed and we were unable to recover it. 
00:36:56.547 [2024-07-10 14:39:05.823291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.547 [2024-07-10 14:39:05.823464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.547 [2024-07-10 14:39:05.823497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.547 [2024-07-10 14:39:05.823520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.547 [2024-07-10 14:39:05.823539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.547 [2024-07-10 14:39:05.823579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.547 qpair failed and we were unable to recover it. 00:36:56.547 [2024-07-10 14:39:05.833410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.547 [2024-07-10 14:39:05.833604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.547 [2024-07-10 14:39:05.833641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.547 [2024-07-10 14:39:05.833665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.547 [2024-07-10 14:39:05.833685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.547 [2024-07-10 14:39:05.833726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.547 qpair failed and we were unable to recover it. 00:36:56.547 [2024-07-10 14:39:05.843365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.547 [2024-07-10 14:39:05.843583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.547 [2024-07-10 14:39:05.843616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.547 [2024-07-10 14:39:05.843639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.547 [2024-07-10 14:39:05.843657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.547 [2024-07-10 14:39:05.843699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.547 qpair failed and we were unable to recover it. 
00:36:56.547 [2024-07-10 14:39:05.853388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.547 [2024-07-10 14:39:05.853565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.547 [2024-07-10 14:39:05.853598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.547 [2024-07-10 14:39:05.853620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.547 [2024-07-10 14:39:05.853639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.547 [2024-07-10 14:39:05.853680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.547 qpair failed and we were unable to recover it. 00:36:56.547 [2024-07-10 14:39:05.863397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.547 [2024-07-10 14:39:05.863580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.547 [2024-07-10 14:39:05.863613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.547 [2024-07-10 14:39:05.863636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.547 [2024-07-10 14:39:05.863654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.547 [2024-07-10 14:39:05.863696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.547 qpair failed and we were unable to recover it. 00:36:56.547 [2024-07-10 14:39:05.873500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.547 [2024-07-10 14:39:05.873717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.547 [2024-07-10 14:39:05.873750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.547 [2024-07-10 14:39:05.873773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.547 [2024-07-10 14:39:05.873791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.547 [2024-07-10 14:39:05.873832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.547 qpair failed and we were unable to recover it. 
00:36:56.547 [2024-07-10 14:39:05.883527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.547 [2024-07-10 14:39:05.883701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.547 [2024-07-10 14:39:05.883734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.547 [2024-07-10 14:39:05.883762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.547 [2024-07-10 14:39:05.883782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.547 [2024-07-10 14:39:05.883824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.547 qpair failed and we were unable to recover it. 00:36:56.547 [2024-07-10 14:39:05.893543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.547 [2024-07-10 14:39:05.893719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.547 [2024-07-10 14:39:05.893752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.547 [2024-07-10 14:39:05.893774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.547 [2024-07-10 14:39:05.893793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.547 [2024-07-10 14:39:05.893834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.547 qpair failed and we were unable to recover it. 00:36:56.547 [2024-07-10 14:39:05.903485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.547 [2024-07-10 14:39:05.903653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.547 [2024-07-10 14:39:05.903698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.547 [2024-07-10 14:39:05.903722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.547 [2024-07-10 14:39:05.903741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.547 [2024-07-10 14:39:05.903782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.547 qpair failed and we were unable to recover it. 
00:36:56.547 [2024-07-10 14:39:05.913576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.547 [2024-07-10 14:39:05.913747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.547 [2024-07-10 14:39:05.913779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.548 [2024-07-10 14:39:05.913802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.548 [2024-07-10 14:39:05.913821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.548 [2024-07-10 14:39:05.913862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.548 qpair failed and we were unable to recover it. 00:36:56.548 [2024-07-10 14:39:05.923627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.548 [2024-07-10 14:39:05.923872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.548 [2024-07-10 14:39:05.923935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.548 [2024-07-10 14:39:05.923960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.548 [2024-07-10 14:39:05.923979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.548 [2024-07-10 14:39:05.924021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.548 qpair failed and we were unable to recover it. 00:36:56.548 [2024-07-10 14:39:05.933647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.548 [2024-07-10 14:39:05.933813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.548 [2024-07-10 14:39:05.933847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.548 [2024-07-10 14:39:05.933871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.548 [2024-07-10 14:39:05.933890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.548 [2024-07-10 14:39:05.933931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.548 qpair failed and we were unable to recover it. 
00:36:56.548 [2024-07-10 14:39:05.943665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.548 [2024-07-10 14:39:05.943822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.548 [2024-07-10 14:39:05.943855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.548 [2024-07-10 14:39:05.943878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.548 [2024-07-10 14:39:05.943897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.548 [2024-07-10 14:39:05.943937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.548 qpair failed and we were unable to recover it. 00:36:56.548 [2024-07-10 14:39:05.953651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.548 [2024-07-10 14:39:05.953828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.548 [2024-07-10 14:39:05.953861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.548 [2024-07-10 14:39:05.953884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.548 [2024-07-10 14:39:05.953903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.548 [2024-07-10 14:39:05.953944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.548 qpair failed and we were unable to recover it. 00:36:56.548 [2024-07-10 14:39:05.963710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.548 [2024-07-10 14:39:05.963881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.548 [2024-07-10 14:39:05.963913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.548 [2024-07-10 14:39:05.963936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.548 [2024-07-10 14:39:05.963954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.548 [2024-07-10 14:39:05.963995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.548 qpair failed and we were unable to recover it. 
00:36:56.548 [2024-07-10 14:39:05.973818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.548 [2024-07-10 14:39:05.974007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.548 [2024-07-10 14:39:05.974047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.548 [2024-07-10 14:39:05.974071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.548 [2024-07-10 14:39:05.974090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.548 [2024-07-10 14:39:05.974132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.548 qpair failed and we were unable to recover it. 00:36:56.548 [2024-07-10 14:39:05.983774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.548 [2024-07-10 14:39:05.983963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.548 [2024-07-10 14:39:05.983997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.548 [2024-07-10 14:39:05.984020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.548 [2024-07-10 14:39:05.984039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.548 [2024-07-10 14:39:05.984080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.548 qpair failed and we were unable to recover it. 00:36:56.548 [2024-07-10 14:39:05.993800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.548 [2024-07-10 14:39:05.993974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.548 [2024-07-10 14:39:05.994008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.548 [2024-07-10 14:39:05.994031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.548 [2024-07-10 14:39:05.994049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.548 [2024-07-10 14:39:05.994090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.548 qpair failed and we were unable to recover it. 
00:36:56.548 [2024-07-10 14:39:06.003839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.548 [2024-07-10 14:39:06.004006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.548 [2024-07-10 14:39:06.004040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.548 [2024-07-10 14:39:06.004062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.548 [2024-07-10 14:39:06.004081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.548 [2024-07-10 14:39:06.004121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.548 qpair failed and we were unable to recover it. 00:36:56.548 [2024-07-10 14:39:06.013880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.548 [2024-07-10 14:39:06.014097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.548 [2024-07-10 14:39:06.014130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.548 [2024-07-10 14:39:06.014153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.548 [2024-07-10 14:39:06.014171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.548 [2024-07-10 14:39:06.014217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.548 qpair failed and we were unable to recover it. 00:36:56.548 [2024-07-10 14:39:06.023893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.548 [2024-07-10 14:39:06.024072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.548 [2024-07-10 14:39:06.024115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.548 [2024-07-10 14:39:06.024140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.548 [2024-07-10 14:39:06.024159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.548 [2024-07-10 14:39:06.024222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.548 qpair failed and we were unable to recover it. 
00:36:56.806 [2024-07-10 14:39:06.033971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.806 [2024-07-10 14:39:06.034188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.806 [2024-07-10 14:39:06.034222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.806 [2024-07-10 14:39:06.034245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.807 [2024-07-10 14:39:06.034263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.807 [2024-07-10 14:39:06.034306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.807 qpair failed and we were unable to recover it. 00:36:56.807 [2024-07-10 14:39:06.043975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.807 [2024-07-10 14:39:06.044150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.807 [2024-07-10 14:39:06.044184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.807 [2024-07-10 14:39:06.044206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.807 [2024-07-10 14:39:06.044224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.807 [2024-07-10 14:39:06.044265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.807 qpair failed and we were unable to recover it. 00:36:56.807 [2024-07-10 14:39:06.053993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.807 [2024-07-10 14:39:06.054157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.807 [2024-07-10 14:39:06.054191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.807 [2024-07-10 14:39:06.054214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.807 [2024-07-10 14:39:06.054233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.807 [2024-07-10 14:39:06.054274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.807 qpair failed and we were unable to recover it. 
00:36:56.807 [2024-07-10 14:39:06.063976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.807 [2024-07-10 14:39:06.064138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.807 [2024-07-10 14:39:06.064176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.807 [2024-07-10 14:39:06.064199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.807 [2024-07-10 14:39:06.064218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.807 [2024-07-10 14:39:06.064259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.807 qpair failed and we were unable to recover it. 00:36:56.807 [2024-07-10 14:39:06.074081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.807 [2024-07-10 14:39:06.074301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.807 [2024-07-10 14:39:06.074334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.807 [2024-07-10 14:39:06.074357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.807 [2024-07-10 14:39:06.074375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.807 [2024-07-10 14:39:06.074416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.807 qpair failed and we were unable to recover it. 00:36:56.807 [2024-07-10 14:39:06.084037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.807 [2024-07-10 14:39:06.084215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.807 [2024-07-10 14:39:06.084248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.807 [2024-07-10 14:39:06.084271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.807 [2024-07-10 14:39:06.084290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.807 [2024-07-10 14:39:06.084331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.807 qpair failed and we were unable to recover it. 
00:36:56.807 [2024-07-10 14:39:06.094160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.807 [2024-07-10 14:39:06.094332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.807 [2024-07-10 14:39:06.094365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.807 [2024-07-10 14:39:06.094389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.807 [2024-07-10 14:39:06.094408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.807 [2024-07-10 14:39:06.094461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.807 qpair failed and we were unable to recover it. 00:36:56.807 [2024-07-10 14:39:06.104188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.807 [2024-07-10 14:39:06.104356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.807 [2024-07-10 14:39:06.104390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.807 [2024-07-10 14:39:06.104413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.807 [2024-07-10 14:39:06.104446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.807 [2024-07-10 14:39:06.104489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.807 qpair failed and we were unable to recover it. 00:36:56.807 [2024-07-10 14:39:06.114141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.807 [2024-07-10 14:39:06.114311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.807 [2024-07-10 14:39:06.114344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.807 [2024-07-10 14:39:06.114367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.807 [2024-07-10 14:39:06.114385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.807 [2024-07-10 14:39:06.114440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.807 qpair failed and we were unable to recover it. 
00:36:56.807 [2024-07-10 14:39:06.124178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.807 [2024-07-10 14:39:06.124347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.807 [2024-07-10 14:39:06.124380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.807 [2024-07-10 14:39:06.124402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.807 [2024-07-10 14:39:06.124420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.807 [2024-07-10 14:39:06.124473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.807 qpair failed and we were unable to recover it. 00:36:56.807 [2024-07-10 14:39:06.134237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.807 [2024-07-10 14:39:06.134406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.807 [2024-07-10 14:39:06.134454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.807 [2024-07-10 14:39:06.134478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.807 [2024-07-10 14:39:06.134497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.807 [2024-07-10 14:39:06.134538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.807 qpair failed and we were unable to recover it. 00:36:56.807 [2024-07-10 14:39:06.144392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.807 [2024-07-10 14:39:06.144583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.808 [2024-07-10 14:39:06.144620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.808 [2024-07-10 14:39:06.144643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.808 [2024-07-10 14:39:06.144662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.808 [2024-07-10 14:39:06.144703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.808 qpair failed and we were unable to recover it. 
00:36:56.808 [2024-07-10 14:39:06.154281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.808 [2024-07-10 14:39:06.154498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.808 [2024-07-10 14:39:06.154531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.808 [2024-07-10 14:39:06.154554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.808 [2024-07-10 14:39:06.154573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.808 [2024-07-10 14:39:06.154629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.808 qpair failed and we were unable to recover it. 00:36:56.808 [2024-07-10 14:39:06.164285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.808 [2024-07-10 14:39:06.164464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.808 [2024-07-10 14:39:06.164496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.808 [2024-07-10 14:39:06.164519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.808 [2024-07-10 14:39:06.164539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.808 [2024-07-10 14:39:06.164579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.808 qpair failed and we were unable to recover it. 00:36:56.808 [2024-07-10 14:39:06.174415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.808 [2024-07-10 14:39:06.174645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.808 [2024-07-10 14:39:06.174678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.808 [2024-07-10 14:39:06.174700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.808 [2024-07-10 14:39:06.174719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.808 [2024-07-10 14:39:06.174760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.808 qpair failed and we were unable to recover it. 
00:36:56.808 [2024-07-10 14:39:06.184364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.808 [2024-07-10 14:39:06.184552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.808 [2024-07-10 14:39:06.184585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.808 [2024-07-10 14:39:06.184608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.808 [2024-07-10 14:39:06.184627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.808 [2024-07-10 14:39:06.184668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.808 qpair failed and we were unable to recover it. 00:36:56.808 [2024-07-10 14:39:06.194411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.808 [2024-07-10 14:39:06.194605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.808 [2024-07-10 14:39:06.194638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.808 [2024-07-10 14:39:06.194661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.808 [2024-07-10 14:39:06.194685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.808 [2024-07-10 14:39:06.194727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.808 qpair failed and we were unable to recover it. 00:36:56.808 [2024-07-10 14:39:06.204449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.808 [2024-07-10 14:39:06.204666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.808 [2024-07-10 14:39:06.204699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.808 [2024-07-10 14:39:06.204722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.808 [2024-07-10 14:39:06.204741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.808 [2024-07-10 14:39:06.204782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.808 qpair failed and we were unable to recover it. 
00:36:56.808 [2024-07-10 14:39:06.214500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.808 [2024-07-10 14:39:06.214667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.808 [2024-07-10 14:39:06.214700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.808 [2024-07-10 14:39:06.214723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.808 [2024-07-10 14:39:06.214743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.808 [2024-07-10 14:39:06.214784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.808 qpair failed and we were unable to recover it. 00:36:56.808 [2024-07-10 14:39:06.224450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.808 [2024-07-10 14:39:06.224611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.808 [2024-07-10 14:39:06.224644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.808 [2024-07-10 14:39:06.224667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.808 [2024-07-10 14:39:06.224686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.808 [2024-07-10 14:39:06.224727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.808 qpair failed and we were unable to recover it. 00:36:56.808 [2024-07-10 14:39:06.234512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.808 [2024-07-10 14:39:06.234685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.808 [2024-07-10 14:39:06.234718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.808 [2024-07-10 14:39:06.234741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.808 [2024-07-10 14:39:06.234759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.808 [2024-07-10 14:39:06.234800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.808 qpair failed and we were unable to recover it. 
00:36:56.808 [2024-07-10 14:39:06.244607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.808 [2024-07-10 14:39:06.244784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.808 [2024-07-10 14:39:06.244818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.808 [2024-07-10 14:39:06.244840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.808 [2024-07-10 14:39:06.244858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.808 [2024-07-10 14:39:06.244899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.808 qpair failed and we were unable to recover it. 00:36:56.808 [2024-07-10 14:39:06.254643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.808 [2024-07-10 14:39:06.254813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.809 [2024-07-10 14:39:06.254846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.809 [2024-07-10 14:39:06.254869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.809 [2024-07-10 14:39:06.254888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.809 [2024-07-10 14:39:06.254929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.809 qpair failed and we were unable to recover it. 00:36:56.809 [2024-07-10 14:39:06.264605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.809 [2024-07-10 14:39:06.264821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.809 [2024-07-10 14:39:06.264854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.809 [2024-07-10 14:39:06.264877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.809 [2024-07-10 14:39:06.264896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.809 [2024-07-10 14:39:06.264937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.809 qpair failed and we were unable to recover it. 
00:36:56.809 [2024-07-10 14:39:06.274617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.809 [2024-07-10 14:39:06.274794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.809 [2024-07-10 14:39:06.274828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.809 [2024-07-10 14:39:06.274851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.809 [2024-07-10 14:39:06.274870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.809 [2024-07-10 14:39:06.274910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.809 qpair failed and we were unable to recover it. 00:36:56.809 [2024-07-10 14:39:06.284838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.809 [2024-07-10 14:39:06.285000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.809 [2024-07-10 14:39:06.285035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.809 [2024-07-10 14:39:06.285064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.809 [2024-07-10 14:39:06.285084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:56.809 [2024-07-10 14:39:06.285126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:56.809 qpair failed and we were unable to recover it. 00:36:57.068 [2024-07-10 14:39:06.294745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.068 [2024-07-10 14:39:06.294917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.068 [2024-07-10 14:39:06.294952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.068 [2024-07-10 14:39:06.294975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.068 [2024-07-10 14:39:06.294994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.068 [2024-07-10 14:39:06.295035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.068 qpair failed and we were unable to recover it. 
00:36:57.068 [2024-07-10 14:39:06.304715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.068 [2024-07-10 14:39:06.304908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.068 [2024-07-10 14:39:06.304942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.068 [2024-07-10 14:39:06.304965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.068 [2024-07-10 14:39:06.304984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.068 [2024-07-10 14:39:06.305025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.068 qpair failed and we were unable to recover it. 00:36:57.068 [2024-07-10 14:39:06.315009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.068 [2024-07-10 14:39:06.315190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.068 [2024-07-10 14:39:06.315230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.068 [2024-07-10 14:39:06.315252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.068 [2024-07-10 14:39:06.315271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.068 [2024-07-10 14:39:06.315311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.068 qpair failed and we were unable to recover it. 00:36:57.068 [2024-07-10 14:39:06.324732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.068 [2024-07-10 14:39:06.324917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.068 [2024-07-10 14:39:06.324950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.068 [2024-07-10 14:39:06.324973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.068 [2024-07-10 14:39:06.324992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.068 [2024-07-10 14:39:06.325032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.068 qpair failed and we were unable to recover it. 
00:36:57.068 [2024-07-10 14:39:06.334848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.068 [2024-07-10 14:39:06.335012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.068 [2024-07-10 14:39:06.335045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.068 [2024-07-10 14:39:06.335067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.068 [2024-07-10 14:39:06.335085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.068 [2024-07-10 14:39:06.335125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.068 qpair failed and we were unable to recover it. 00:36:57.068 [2024-07-10 14:39:06.344836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.068 [2024-07-10 14:39:06.345004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.068 [2024-07-10 14:39:06.345037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.068 [2024-07-10 14:39:06.345060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.068 [2024-07-10 14:39:06.345078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.068 [2024-07-10 14:39:06.345118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.068 qpair failed and we were unable to recover it. 00:36:57.068 [2024-07-10 14:39:06.354838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.068 [2024-07-10 14:39:06.355021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.069 [2024-07-10 14:39:06.355055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.069 [2024-07-10 14:39:06.355077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.069 [2024-07-10 14:39:06.355096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.069 [2024-07-10 14:39:06.355147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.069 qpair failed and we were unable to recover it. 
00:36:57.069 [2024-07-10 14:39:06.364922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.069 [2024-07-10 14:39:06.365093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.069 [2024-07-10 14:39:06.365126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.069 [2024-07-10 14:39:06.365148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.069 [2024-07-10 14:39:06.365167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.069 [2024-07-10 14:39:06.365207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.069 qpair failed and we were unable to recover it. 00:36:57.069 [2024-07-10 14:39:06.374933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.069 [2024-07-10 14:39:06.375103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.069 [2024-07-10 14:39:06.375141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.069 [2024-07-10 14:39:06.375168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.069 [2024-07-10 14:39:06.375187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.069 [2024-07-10 14:39:06.375228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.069 qpair failed and we were unable to recover it. 00:36:57.069 [2024-07-10 14:39:06.384963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.069 [2024-07-10 14:39:06.385146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.069 [2024-07-10 14:39:06.385179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.069 [2024-07-10 14:39:06.385201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.069 [2024-07-10 14:39:06.385220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.069 [2024-07-10 14:39:06.385260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.069 qpair failed and we were unable to recover it. 
00:36:57.069 [2024-07-10 14:39:06.394987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.069 [2024-07-10 14:39:06.395202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.069 [2024-07-10 14:39:06.395236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.069 [2024-07-10 14:39:06.395258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.069 [2024-07-10 14:39:06.395276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.069 [2024-07-10 14:39:06.395317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.069 qpair failed and we were unable to recover it. 00:36:57.069 [2024-07-10 14:39:06.405000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.069 [2024-07-10 14:39:06.405180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.069 [2024-07-10 14:39:06.405213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.069 [2024-07-10 14:39:06.405236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.069 [2024-07-10 14:39:06.405254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.069 [2024-07-10 14:39:06.405294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.069 qpair failed and we were unable to recover it. 00:36:57.069 [2024-07-10 14:39:06.415063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.069 [2024-07-10 14:39:06.415233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.069 [2024-07-10 14:39:06.415266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.069 [2024-07-10 14:39:06.415303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.069 [2024-07-10 14:39:06.415322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.069 [2024-07-10 14:39:06.415367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.069 qpair failed and we were unable to recover it. 
00:36:57.069 [2024-07-10 14:39:06.425056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.069 [2024-07-10 14:39:06.425224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.069 [2024-07-10 14:39:06.425258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.069 [2024-07-10 14:39:06.425281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.069 [2024-07-10 14:39:06.425300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.069 [2024-07-10 14:39:06.425340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.069 qpair failed and we were unable to recover it. 00:36:57.069 [2024-07-10 14:39:06.435129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.069 [2024-07-10 14:39:06.435316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.069 [2024-07-10 14:39:06.435349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.069 [2024-07-10 14:39:06.435371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.069 [2024-07-10 14:39:06.435390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.069 [2024-07-10 14:39:06.435438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.069 qpair failed and we were unable to recover it. 00:36:57.069 [2024-07-10 14:39:06.445166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.069 [2024-07-10 14:39:06.445341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.069 [2024-07-10 14:39:06.445374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.069 [2024-07-10 14:39:06.445397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.069 [2024-07-10 14:39:06.445416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.069 [2024-07-10 14:39:06.445473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.069 qpair failed and we were unable to recover it. 
00:36:57.069 [2024-07-10 14:39:06.455212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.069 [2024-07-10 14:39:06.455380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.069 [2024-07-10 14:39:06.455413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.069 [2024-07-10 14:39:06.455449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.069 [2024-07-10 14:39:06.455469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.069 [2024-07-10 14:39:06.455510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.069 qpair failed and we were unable to recover it. 00:36:57.069 [2024-07-10 14:39:06.465158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.069 [2024-07-10 14:39:06.465336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.069 [2024-07-10 14:39:06.465374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.069 [2024-07-10 14:39:06.465397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.069 [2024-07-10 14:39:06.465416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.069 [2024-07-10 14:39:06.465476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.070 qpair failed and we were unable to recover it. 00:36:57.070 [2024-07-10 14:39:06.475235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.070 [2024-07-10 14:39:06.475462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.070 [2024-07-10 14:39:06.475501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.070 [2024-07-10 14:39:06.475523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.070 [2024-07-10 14:39:06.475542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.070 [2024-07-10 14:39:06.475583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.070 qpair failed and we were unable to recover it. 
00:36:57.070 [2024-07-10 14:39:06.485240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.070 [2024-07-10 14:39:06.485440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.070 [2024-07-10 14:39:06.485473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.070 [2024-07-10 14:39:06.485496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.070 [2024-07-10 14:39:06.485514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.070 [2024-07-10 14:39:06.485555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.070 qpair failed and we were unable to recover it. 00:36:57.070 [2024-07-10 14:39:06.495267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.070 [2024-07-10 14:39:06.495451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.070 [2024-07-10 14:39:06.495487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.070 [2024-07-10 14:39:06.495509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.070 [2024-07-10 14:39:06.495528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.070 [2024-07-10 14:39:06.495569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.070 qpair failed and we were unable to recover it. 00:36:57.070 [2024-07-10 14:39:06.505284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.070 [2024-07-10 14:39:06.505459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.070 [2024-07-10 14:39:06.505492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.070 [2024-07-10 14:39:06.505515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.070 [2024-07-10 14:39:06.505533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.070 [2024-07-10 14:39:06.505585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.070 qpair failed and we were unable to recover it. 
00:36:57.070 [2024-07-10 14:39:06.515336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.070 [2024-07-10 14:39:06.515516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.070 [2024-07-10 14:39:06.515559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.070 [2024-07-10 14:39:06.515581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.070 [2024-07-10 14:39:06.515599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.070 [2024-07-10 14:39:06.515640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.070 qpair failed and we were unable to recover it. 00:36:57.070 [2024-07-10 14:39:06.525375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.070 [2024-07-10 14:39:06.525552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.070 [2024-07-10 14:39:06.525585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.070 [2024-07-10 14:39:06.525607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.070 [2024-07-10 14:39:06.525625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.070 [2024-07-10 14:39:06.525665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.070 qpair failed and we were unable to recover it. 00:36:57.070 [2024-07-10 14:39:06.535410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.070 [2024-07-10 14:39:06.535588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.070 [2024-07-10 14:39:06.535621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.070 [2024-07-10 14:39:06.535643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.070 [2024-07-10 14:39:06.535662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.070 [2024-07-10 14:39:06.535702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.070 qpair failed and we were unable to recover it. 
00:36:57.070 [2024-07-10 14:39:06.545372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.070 [2024-07-10 14:39:06.545543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.070 [2024-07-10 14:39:06.545579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.070 [2024-07-10 14:39:06.545604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.070 [2024-07-10 14:39:06.545640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.070 [2024-07-10 14:39:06.545701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.070 qpair failed and we were unable to recover it. 00:36:57.330 [2024-07-10 14:39:06.555461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.330 [2024-07-10 14:39:06.555646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.330 [2024-07-10 14:39:06.555681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.330 [2024-07-10 14:39:06.555704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.330 [2024-07-10 14:39:06.555722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.330 [2024-07-10 14:39:06.555765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.330 qpair failed and we were unable to recover it. 00:36:57.330 [2024-07-10 14:39:06.565472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.330 [2024-07-10 14:39:06.565640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.330 [2024-07-10 14:39:06.565678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.330 [2024-07-10 14:39:06.565701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.330 [2024-07-10 14:39:06.565719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.330 [2024-07-10 14:39:06.565760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.330 qpair failed and we were unable to recover it. 
00:36:57.330 [2024-07-10 14:39:06.575545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.330 [2024-07-10 14:39:06.575713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.330 [2024-07-10 14:39:06.575746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.330 [2024-07-10 14:39:06.575769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.330 [2024-07-10 14:39:06.575787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.330 [2024-07-10 14:39:06.575828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.330 qpair failed and we were unable to recover it. 00:36:57.330 [2024-07-10 14:39:06.585492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.330 [2024-07-10 14:39:06.585652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.330 [2024-07-10 14:39:06.585685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.330 [2024-07-10 14:39:06.585707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.330 [2024-07-10 14:39:06.585726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.330 [2024-07-10 14:39:06.585767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.330 qpair failed and we were unable to recover it. 00:36:57.330 [2024-07-10 14:39:06.595547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.330 [2024-07-10 14:39:06.595725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.330 [2024-07-10 14:39:06.595759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.330 [2024-07-10 14:39:06.595785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.330 [2024-07-10 14:39:06.595810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.330 [2024-07-10 14:39:06.595852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.330 qpair failed and we were unable to recover it. 
00:36:57.330 [2024-07-10 14:39:06.605594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.330 [2024-07-10 14:39:06.605780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.605813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.605836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.331 [2024-07-10 14:39:06.605855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.331 [2024-07-10 14:39:06.605896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.331 qpair failed and we were unable to recover it. 00:36:57.331 [2024-07-10 14:39:06.615654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.331 [2024-07-10 14:39:06.615844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.615877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.615900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.331 [2024-07-10 14:39:06.615920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.331 [2024-07-10 14:39:06.615961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.331 qpair failed and we were unable to recover it. 00:36:57.331 [2024-07-10 14:39:06.625690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.331 [2024-07-10 14:39:06.625868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.625901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.625924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.331 [2024-07-10 14:39:06.625943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.331 [2024-07-10 14:39:06.625983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.331 qpair failed and we were unable to recover it. 
00:36:57.331 [2024-07-10 14:39:06.635669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.331 [2024-07-10 14:39:06.635882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.635914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.635936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.331 [2024-07-10 14:39:06.635954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.331 [2024-07-10 14:39:06.635994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.331 qpair failed and we were unable to recover it. 00:36:57.331 [2024-07-10 14:39:06.645692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.331 [2024-07-10 14:39:06.645860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.645893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.645915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.331 [2024-07-10 14:39:06.645932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.331 [2024-07-10 14:39:06.645971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.331 qpair failed and we were unable to recover it. 00:36:57.331 [2024-07-10 14:39:06.655765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.331 [2024-07-10 14:39:06.655953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.655986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.656008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.331 [2024-07-10 14:39:06.656026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.331 [2024-07-10 14:39:06.656067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.331 qpair failed and we were unable to recover it. 
00:36:57.331 [2024-07-10 14:39:06.665776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.331 [2024-07-10 14:39:06.665934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.665967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.665989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.331 [2024-07-10 14:39:06.666008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.331 [2024-07-10 14:39:06.666048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.331 qpair failed and we were unable to recover it. 00:36:57.331 [2024-07-10 14:39:06.675766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.331 [2024-07-10 14:39:06.675979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.676012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.676034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.331 [2024-07-10 14:39:06.676053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.331 [2024-07-10 14:39:06.676092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.331 qpair failed and we were unable to recover it. 00:36:57.331 [2024-07-10 14:39:06.685870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.331 [2024-07-10 14:39:06.686053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.686085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.686113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.331 [2024-07-10 14:39:06.686133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.331 [2024-07-10 14:39:06.686173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.331 qpair failed and we were unable to recover it. 
00:36:57.331 [2024-07-10 14:39:06.695833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.331 [2024-07-10 14:39:06.696013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.696045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.696067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.331 [2024-07-10 14:39:06.696086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.331 [2024-07-10 14:39:06.696125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.331 qpair failed and we were unable to recover it. 00:36:57.331 [2024-07-10 14:39:06.705816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.331 [2024-07-10 14:39:06.706027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.706065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.706088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.331 [2024-07-10 14:39:06.706106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.331 [2024-07-10 14:39:06.706146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.331 qpair failed and we were unable to recover it. 00:36:57.331 [2024-07-10 14:39:06.715925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.331 [2024-07-10 14:39:06.716089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.716122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.716144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.331 [2024-07-10 14:39:06.716163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.331 [2024-07-10 14:39:06.716204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.331 qpair failed and we were unable to recover it. 
00:36:57.331 [2024-07-10 14:39:06.725906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.331 [2024-07-10 14:39:06.726125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.726158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.726180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.331 [2024-07-10 14:39:06.726199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.331 [2024-07-10 14:39:06.726240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.331 qpair failed and we were unable to recover it. 00:36:57.331 [2024-07-10 14:39:06.735967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.331 [2024-07-10 14:39:06.736127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.736159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.736180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.331 [2024-07-10 14:39:06.736198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.331 [2024-07-10 14:39:06.736237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.331 qpair failed and we were unable to recover it. 00:36:57.331 [2024-07-10 14:39:06.745989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.331 [2024-07-10 14:39:06.746144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.331 [2024-07-10 14:39:06.746177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.331 [2024-07-10 14:39:06.746200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.332 [2024-07-10 14:39:06.746218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.332 [2024-07-10 14:39:06.746259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.332 qpair failed and we were unable to recover it. 
00:36:57.332 [2024-07-10 14:39:06.756005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.332 [2024-07-10 14:39:06.756221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.332 [2024-07-10 14:39:06.756254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.332 [2024-07-10 14:39:06.756277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.332 [2024-07-10 14:39:06.756296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.332 [2024-07-10 14:39:06.756337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.332 qpair failed and we were unable to recover it. 00:36:57.332 [2024-07-10 14:39:06.766050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.332 [2024-07-10 14:39:06.766240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.332 [2024-07-10 14:39:06.766273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.332 [2024-07-10 14:39:06.766296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.332 [2024-07-10 14:39:06.766315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.332 [2024-07-10 14:39:06.766355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.332 qpair failed and we were unable to recover it. 00:36:57.332 [2024-07-10 14:39:06.776130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.332 [2024-07-10 14:39:06.776307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.332 [2024-07-10 14:39:06.776341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.332 [2024-07-10 14:39:06.776370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.332 [2024-07-10 14:39:06.776390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.332 [2024-07-10 14:39:06.776438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.332 qpair failed and we were unable to recover it. 
00:36:57.332 [2024-07-10 14:39:06.786070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.332 [2024-07-10 14:39:06.786242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.332 [2024-07-10 14:39:06.786275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.332 [2024-07-10 14:39:06.786298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.332 [2024-07-10 14:39:06.786316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.332 [2024-07-10 14:39:06.786357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.332 qpair failed and we were unable to recover it. 00:36:57.332 [2024-07-10 14:39:06.796154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.332 [2024-07-10 14:39:06.796336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.332 [2024-07-10 14:39:06.796369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.332 [2024-07-10 14:39:06.796392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.332 [2024-07-10 14:39:06.796411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.332 [2024-07-10 14:39:06.796460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.332 qpair failed and we were unable to recover it. 00:36:57.332 [2024-07-10 14:39:06.806147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.332 [2024-07-10 14:39:06.806317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.332 [2024-07-10 14:39:06.806352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.332 [2024-07-10 14:39:06.806375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.332 [2024-07-10 14:39:06.806394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.332 [2024-07-10 14:39:06.806446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.332 qpair failed and we were unable to recover it. 
00:36:57.591 [2024-07-10 14:39:06.816257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.591 [2024-07-10 14:39:06.816448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.591 [2024-07-10 14:39:06.816484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.591 [2024-07-10 14:39:06.816507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.591 [2024-07-10 14:39:06.816526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.591 [2024-07-10 14:39:06.816567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.592 qpair failed and we were unable to recover it. 00:36:57.592 [2024-07-10 14:39:06.826267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.592 [2024-07-10 14:39:06.826468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.592 [2024-07-10 14:39:06.826502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.592 [2024-07-10 14:39:06.826525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.592 [2024-07-10 14:39:06.826544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.592 [2024-07-10 14:39:06.826586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.592 qpair failed and we were unable to recover it. 00:36:57.592 [2024-07-10 14:39:06.836292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.592 [2024-07-10 14:39:06.836494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.592 [2024-07-10 14:39:06.836528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.592 [2024-07-10 14:39:06.836550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.592 [2024-07-10 14:39:06.836569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.592 [2024-07-10 14:39:06.836611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.592 qpair failed and we were unable to recover it. 
00:36:57.592 [2024-07-10 14:39:06.846324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.592 [2024-07-10 14:39:06.846501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.592 [2024-07-10 14:39:06.846538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.592 [2024-07-10 14:39:06.846562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.592 [2024-07-10 14:39:06.846581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.592 [2024-07-10 14:39:06.846623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.592 qpair failed and we were unable to recover it. 00:36:57.592 [2024-07-10 14:39:06.856339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.592 [2024-07-10 14:39:06.856522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.592 [2024-07-10 14:39:06.856556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.592 [2024-07-10 14:39:06.856578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.592 [2024-07-10 14:39:06.856596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.592 [2024-07-10 14:39:06.856637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.592 qpair failed and we were unable to recover it. 00:36:57.592 [2024-07-10 14:39:06.866346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.592 [2024-07-10 14:39:06.866523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.592 [2024-07-10 14:39:06.866562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.592 [2024-07-10 14:39:06.866586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.592 [2024-07-10 14:39:06.866604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.592 [2024-07-10 14:39:06.866645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.592 qpair failed and we were unable to recover it. 
00:36:57.592 [2024-07-10 14:39:06.876410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.592 [2024-07-10 14:39:06.876602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.592 [2024-07-10 14:39:06.876636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.592 [2024-07-10 14:39:06.876659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.592 [2024-07-10 14:39:06.876677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.592 [2024-07-10 14:39:06.876719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.592 qpair failed and we were unable to recover it. 00:36:57.592 [2024-07-10 14:39:06.886361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.592 [2024-07-10 14:39:06.886524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.592 [2024-07-10 14:39:06.886557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.592 [2024-07-10 14:39:06.886580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.592 [2024-07-10 14:39:06.886598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.592 [2024-07-10 14:39:06.886639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.592 qpair failed and we were unable to recover it. 00:36:57.592 [2024-07-10 14:39:06.896617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.592 [2024-07-10 14:39:06.896824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.592 [2024-07-10 14:39:06.896857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.592 [2024-07-10 14:39:06.896880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.592 [2024-07-10 14:39:06.896899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.592 [2024-07-10 14:39:06.896946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.592 qpair failed and we were unable to recover it. 
00:36:57.592 [2024-07-10 14:39:06.906477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.592 [2024-07-10 14:39:06.906632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.592 [2024-07-10 14:39:06.906665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.592 [2024-07-10 14:39:06.906688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.592 [2024-07-10 14:39:06.906706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.592 [2024-07-10 14:39:06.906753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.592 qpair failed and we were unable to recover it. 00:36:57.592 [2024-07-10 14:39:06.916525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.592 [2024-07-10 14:39:06.916718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.593 [2024-07-10 14:39:06.916750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.593 [2024-07-10 14:39:06.916773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.593 [2024-07-10 14:39:06.916792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.593 [2024-07-10 14:39:06.916833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.593 qpair failed and we were unable to recover it. 00:36:57.593 [2024-07-10 14:39:06.926539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.593 [2024-07-10 14:39:06.926714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.593 [2024-07-10 14:39:06.926747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.593 [2024-07-10 14:39:06.926770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.593 [2024-07-10 14:39:06.926802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.593 [2024-07-10 14:39:06.926843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.593 qpair failed and we were unable to recover it. 
00:36:57.593 [2024-07-10 14:39:06.936586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.593 [2024-07-10 14:39:06.936761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.593 [2024-07-10 14:39:06.936798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.593 [2024-07-10 14:39:06.936821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.593 [2024-07-10 14:39:06.936839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.593 [2024-07-10 14:39:06.936880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.593 qpair failed and we were unable to recover it. 00:36:57.593 [2024-07-10 14:39:06.946605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.593 [2024-07-10 14:39:06.946776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.593 [2024-07-10 14:39:06.946809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.593 [2024-07-10 14:39:06.946832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.593 [2024-07-10 14:39:06.946851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.593 [2024-07-10 14:39:06.946891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.593 qpair failed and we were unable to recover it. 00:36:57.593 [2024-07-10 14:39:06.956658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.593 [2024-07-10 14:39:06.956831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.593 [2024-07-10 14:39:06.956870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.593 [2024-07-10 14:39:06.956894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.593 [2024-07-10 14:39:06.956912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.593 [2024-07-10 14:39:06.956953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.593 qpair failed and we were unable to recover it. 
00:36:57.593 [2024-07-10 14:39:06.966664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.593 [2024-07-10 14:39:06.966843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.593 [2024-07-10 14:39:06.966877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.593 [2024-07-10 14:39:06.966900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.593 [2024-07-10 14:39:06.966918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.593 [2024-07-10 14:39:06.966958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.593 qpair failed and we were unable to recover it. 00:36:57.593 [2024-07-10 14:39:06.976670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.593 [2024-07-10 14:39:06.976838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.593 [2024-07-10 14:39:06.976871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.593 [2024-07-10 14:39:06.976894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.593 [2024-07-10 14:39:06.976912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.593 [2024-07-10 14:39:06.976953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.593 qpair failed and we were unable to recover it. 00:36:57.593 [2024-07-10 14:39:06.986720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.593 [2024-07-10 14:39:06.986877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.593 [2024-07-10 14:39:06.986909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.593 [2024-07-10 14:39:06.986931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.593 [2024-07-10 14:39:06.986950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.593 [2024-07-10 14:39:06.986990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.593 qpair failed and we were unable to recover it. 
00:36:57.593 [2024-07-10 14:39:06.996701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.593 [2024-07-10 14:39:06.996932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.593 [2024-07-10 14:39:06.996965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.593 [2024-07-10 14:39:06.996987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.593 [2024-07-10 14:39:06.997011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.593 [2024-07-10 14:39:06.997053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.593 qpair failed and we were unable to recover it. 00:36:57.593 [2024-07-10 14:39:07.006781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.593 [2024-07-10 14:39:07.006945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.593 [2024-07-10 14:39:07.006978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.593 [2024-07-10 14:39:07.007001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.593 [2024-07-10 14:39:07.007019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.593 [2024-07-10 14:39:07.007059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.593 qpair failed and we were unable to recover it. 00:36:57.593 [2024-07-10 14:39:07.016776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.593 [2024-07-10 14:39:07.016937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.593 [2024-07-10 14:39:07.016970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.593 [2024-07-10 14:39:07.016993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.593 [2024-07-10 14:39:07.017012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.593 [2024-07-10 14:39:07.017053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.593 qpair failed and we were unable to recover it. 
00:36:57.593 [2024-07-10 14:39:07.026785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.593 [2024-07-10 14:39:07.026950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.593 [2024-07-10 14:39:07.026984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.594 [2024-07-10 14:39:07.027010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.594 [2024-07-10 14:39:07.027030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.594 [2024-07-10 14:39:07.027071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.594 qpair failed and we were unable to recover it. 00:36:57.594 [2024-07-10 14:39:07.036867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.594 [2024-07-10 14:39:07.037055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.594 [2024-07-10 14:39:07.037088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.594 [2024-07-10 14:39:07.037111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.594 [2024-07-10 14:39:07.037130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.594 [2024-07-10 14:39:07.037171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.594 qpair failed and we were unable to recover it. 00:36:57.594 [2024-07-10 14:39:07.046902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.594 [2024-07-10 14:39:07.047080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.594 [2024-07-10 14:39:07.047112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.594 [2024-07-10 14:39:07.047133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.594 [2024-07-10 14:39:07.047152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.594 [2024-07-10 14:39:07.047193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.594 qpair failed and we were unable to recover it. 
00:36:57.594 [2024-07-10 14:39:07.056963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.594 [2024-07-10 14:39:07.057138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.594 [2024-07-10 14:39:07.057171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.594 [2024-07-10 14:39:07.057193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.594 [2024-07-10 14:39:07.057212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.594 [2024-07-10 14:39:07.057253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.594 qpair failed and we were unable to recover it. 00:36:57.594 [2024-07-10 14:39:07.066947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.594 [2024-07-10 14:39:07.067129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.594 [2024-07-10 14:39:07.067178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.594 [2024-07-10 14:39:07.067218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.594 [2024-07-10 14:39:07.067249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.594 [2024-07-10 14:39:07.067293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.594 qpair failed and we were unable to recover it. 00:36:57.853 [2024-07-10 14:39:07.076951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.853 [2024-07-10 14:39:07.077144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.853 [2024-07-10 14:39:07.077189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.853 [2024-07-10 14:39:07.077216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.853 [2024-07-10 14:39:07.077236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.853 [2024-07-10 14:39:07.077278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.853 qpair failed and we were unable to recover it. 
00:36:57.853 [2024-07-10 14:39:07.087007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.853 [2024-07-10 14:39:07.087181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.853 [2024-07-10 14:39:07.087214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.853 [2024-07-10 14:39:07.087243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.853 [2024-07-10 14:39:07.087263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.853 [2024-07-10 14:39:07.087304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.853 qpair failed and we were unable to recover it. 00:36:57.853 [2024-07-10 14:39:07.097066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.853 [2024-07-10 14:39:07.097273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.853 [2024-07-10 14:39:07.097311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.853 [2024-07-10 14:39:07.097334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.853 [2024-07-10 14:39:07.097353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.854 [2024-07-10 14:39:07.097394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.854 qpair failed and we were unable to recover it. 00:36:57.854 [2024-07-10 14:39:07.107068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.854 [2024-07-10 14:39:07.107232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.854 [2024-07-10 14:39:07.107266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.854 [2024-07-10 14:39:07.107289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.854 [2024-07-10 14:39:07.107308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.854 [2024-07-10 14:39:07.107347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.854 qpair failed and we were unable to recover it. 
00:36:57.854 [2024-07-10 14:39:07.117082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.854 [2024-07-10 14:39:07.117297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.854 [2024-07-10 14:39:07.117330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.854 [2024-07-10 14:39:07.117352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.854 [2024-07-10 14:39:07.117371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.854 [2024-07-10 14:39:07.117421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.854 qpair failed and we were unable to recover it. 00:36:57.854 [2024-07-10 14:39:07.127063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.854 [2024-07-10 14:39:07.127239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.854 [2024-07-10 14:39:07.127271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.854 [2024-07-10 14:39:07.127294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.854 [2024-07-10 14:39:07.127312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.854 [2024-07-10 14:39:07.127352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.854 qpair failed and we were unable to recover it. 00:36:57.854 [2024-07-10 14:39:07.137172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.854 [2024-07-10 14:39:07.137339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.854 [2024-07-10 14:39:07.137372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.854 [2024-07-10 14:39:07.137394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.854 [2024-07-10 14:39:07.137418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.854 [2024-07-10 14:39:07.137467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.854 qpair failed and we were unable to recover it. 
00:36:57.854 [2024-07-10 14:39:07.147147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.854 [2024-07-10 14:39:07.147313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.854 [2024-07-10 14:39:07.147346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.854 [2024-07-10 14:39:07.147368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.854 [2024-07-10 14:39:07.147387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.854 [2024-07-10 14:39:07.147439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.854 qpair failed and we were unable to recover it. 00:36:57.854 [2024-07-10 14:39:07.157198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.854 [2024-07-10 14:39:07.157370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.854 [2024-07-10 14:39:07.157407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.854 [2024-07-10 14:39:07.157445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.854 [2024-07-10 14:39:07.157468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.854 [2024-07-10 14:39:07.157510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.854 qpair failed and we were unable to recover it. 00:36:57.854 [2024-07-10 14:39:07.167202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.854 [2024-07-10 14:39:07.167371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.854 [2024-07-10 14:39:07.167405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.854 [2024-07-10 14:39:07.167434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.854 [2024-07-10 14:39:07.167455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.854 [2024-07-10 14:39:07.167496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.854 qpair failed and we were unable to recover it. 
00:36:57.854 [2024-07-10 14:39:07.177232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.854 [2024-07-10 14:39:07.177439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.854 [2024-07-10 14:39:07.177473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.854 [2024-07-10 14:39:07.177501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.854 [2024-07-10 14:39:07.177521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.854 [2024-07-10 14:39:07.177561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.854 qpair failed and we were unable to recover it. 00:36:57.854 [2024-07-10 14:39:07.187279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.854 [2024-07-10 14:39:07.187450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.854 [2024-07-10 14:39:07.187494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.854 [2024-07-10 14:39:07.187518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.854 [2024-07-10 14:39:07.187536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.854 [2024-07-10 14:39:07.187577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.854 qpair failed and we were unable to recover it. 00:36:57.854 [2024-07-10 14:39:07.197323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.854 [2024-07-10 14:39:07.197506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.854 [2024-07-10 14:39:07.197539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.854 [2024-07-10 14:39:07.197561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.854 [2024-07-10 14:39:07.197579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.854 [2024-07-10 14:39:07.197619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.854 qpair failed and we were unable to recover it. 
00:36:57.854 [2024-07-10 14:39:07.207320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.854 [2024-07-10 14:39:07.207498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.854 [2024-07-10 14:39:07.207531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.854 [2024-07-10 14:39:07.207553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.854 [2024-07-10 14:39:07.207572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.854 [2024-07-10 14:39:07.207613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.854 qpair failed and we were unable to recover it. 00:36:57.854 [2024-07-10 14:39:07.217408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.854 [2024-07-10 14:39:07.217624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.854 [2024-07-10 14:39:07.217662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.854 [2024-07-10 14:39:07.217686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.855 [2024-07-10 14:39:07.217704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.855 [2024-07-10 14:39:07.217745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.855 qpair failed and we were unable to recover it. 00:36:57.855 [2024-07-10 14:39:07.227454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.855 [2024-07-10 14:39:07.227626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.855 [2024-07-10 14:39:07.227659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.855 [2024-07-10 14:39:07.227682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.855 [2024-07-10 14:39:07.227701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.855 [2024-07-10 14:39:07.227743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.855 qpair failed and we were unable to recover it. 
00:36:57.855 [2024-07-10 14:39:07.237478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.855 [2024-07-10 14:39:07.237686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.855 [2024-07-10 14:39:07.237719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.855 [2024-07-10 14:39:07.237741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.855 [2024-07-10 14:39:07.237760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.855 [2024-07-10 14:39:07.237802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.855 qpair failed and we were unable to recover it. 00:36:57.855 [2024-07-10 14:39:07.247542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.855 [2024-07-10 14:39:07.247720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.855 [2024-07-10 14:39:07.247753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.855 [2024-07-10 14:39:07.247775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.855 [2024-07-10 14:39:07.247793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.855 [2024-07-10 14:39:07.247833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.855 qpair failed and we were unable to recover it. 00:36:57.855 [2024-07-10 14:39:07.257503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.855 [2024-07-10 14:39:07.257669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.855 [2024-07-10 14:39:07.257702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.855 [2024-07-10 14:39:07.257724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.855 [2024-07-10 14:39:07.257743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.855 [2024-07-10 14:39:07.257784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.855 qpair failed and we were unable to recover it. 
00:36:57.855 [2024-07-10 14:39:07.267541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.855 [2024-07-10 14:39:07.267725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.855 [2024-07-10 14:39:07.267765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.855 [2024-07-10 14:39:07.267789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.855 [2024-07-10 14:39:07.267807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.855 [2024-07-10 14:39:07.267848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.855 qpair failed and we were unable to recover it. 00:36:57.855 [2024-07-10 14:39:07.277610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.855 [2024-07-10 14:39:07.277783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.855 [2024-07-10 14:39:07.277816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.855 [2024-07-10 14:39:07.277839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.855 [2024-07-10 14:39:07.277858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.855 [2024-07-10 14:39:07.277899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.855 qpair failed and we were unable to recover it. 00:36:57.855 [2024-07-10 14:39:07.287572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.855 [2024-07-10 14:39:07.287750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.855 [2024-07-10 14:39:07.287783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.855 [2024-07-10 14:39:07.287806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.855 [2024-07-10 14:39:07.287836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.855 [2024-07-10 14:39:07.287882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.855 qpair failed and we were unable to recover it. 
00:36:57.855 [2024-07-10 14:39:07.297608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.855 [2024-07-10 14:39:07.297785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.855 [2024-07-10 14:39:07.297818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.855 [2024-07-10 14:39:07.297841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.855 [2024-07-10 14:39:07.297859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.855 [2024-07-10 14:39:07.297899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.855 qpair failed and we were unable to recover it. 00:36:57.855 [2024-07-10 14:39:07.307641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.855 [2024-07-10 14:39:07.307826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.855 [2024-07-10 14:39:07.307859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.855 [2024-07-10 14:39:07.307882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.855 [2024-07-10 14:39:07.307900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.855 [2024-07-10 14:39:07.307947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.855 qpair failed and we were unable to recover it. 00:36:57.855 [2024-07-10 14:39:07.317754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.855 [2024-07-10 14:39:07.317964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.855 [2024-07-10 14:39:07.317998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.855 [2024-07-10 14:39:07.318020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.855 [2024-07-10 14:39:07.318039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.855 [2024-07-10 14:39:07.318079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.855 qpair failed and we were unable to recover it. 
00:36:57.855 [2024-07-10 14:39:07.327749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.855 [2024-07-10 14:39:07.327911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.855 [2024-07-10 14:39:07.327945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.855 [2024-07-10 14:39:07.327967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.855 [2024-07-10 14:39:07.327986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:57.855 [2024-07-10 14:39:07.328026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:57.855 qpair failed and we were unable to recover it. 00:36:58.115 [2024-07-10 14:39:07.337793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.115 [2024-07-10 14:39:07.337958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.115 [2024-07-10 14:39:07.337994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.115 [2024-07-10 14:39:07.338018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.115 [2024-07-10 14:39:07.338036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.115 [2024-07-10 14:39:07.338077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.115 qpair failed and we were unable to recover it. 00:36:58.115 [2024-07-10 14:39:07.347732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.115 [2024-07-10 14:39:07.347891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.115 [2024-07-10 14:39:07.347925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.115 [2024-07-10 14:39:07.347948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.115 [2024-07-10 14:39:07.347966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.115 [2024-07-10 14:39:07.348007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.115 qpair failed and we were unable to recover it. 
00:36:58.115 [2024-07-10 14:39:07.357828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.115 [2024-07-10 14:39:07.358021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.115 [2024-07-10 14:39:07.358060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.115 [2024-07-10 14:39:07.358084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.115 [2024-07-10 14:39:07.358102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.115 [2024-07-10 14:39:07.358143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.115 qpair failed and we were unable to recover it. 00:36:58.115 [2024-07-10 14:39:07.367821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.115 [2024-07-10 14:39:07.368039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.115 [2024-07-10 14:39:07.368073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.115 [2024-07-10 14:39:07.368096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.115 [2024-07-10 14:39:07.368114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.115 [2024-07-10 14:39:07.368155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.115 qpair failed and we were unable to recover it. 00:36:58.115 [2024-07-10 14:39:07.378056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.115 [2024-07-10 14:39:07.378228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.115 [2024-07-10 14:39:07.378261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.115 [2024-07-10 14:39:07.378284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.115 [2024-07-10 14:39:07.378302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.115 [2024-07-10 14:39:07.378342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.115 qpair failed and we were unable to recover it. 
00:36:58.115 [2024-07-10 14:39:07.387896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.115 [2024-07-10 14:39:07.388058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.115 [2024-07-10 14:39:07.388092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.115 [2024-07-10 14:39:07.388114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.115 [2024-07-10 14:39:07.388133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.115 [2024-07-10 14:39:07.388174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.115 qpair failed and we were unable to recover it. 00:36:58.115 [2024-07-10 14:39:07.397944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.115 [2024-07-10 14:39:07.398125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.115 [2024-07-10 14:39:07.398161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.115 [2024-07-10 14:39:07.398184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.115 [2024-07-10 14:39:07.398208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.115 [2024-07-10 14:39:07.398249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.115 qpair failed and we were unable to recover it. 00:36:58.115 [2024-07-10 14:39:07.407973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.115 [2024-07-10 14:39:07.408140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.115 [2024-07-10 14:39:07.408174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.115 [2024-07-10 14:39:07.408197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.115 [2024-07-10 14:39:07.408215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.115 [2024-07-10 14:39:07.408256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.116 qpair failed and we were unable to recover it. 
00:36:58.116 [2024-07-10 14:39:07.417982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.116 [2024-07-10 14:39:07.418140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.116 [2024-07-10 14:39:07.418173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.116 [2024-07-10 14:39:07.418196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.116 [2024-07-10 14:39:07.418215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.116 [2024-07-10 14:39:07.418255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.116 qpair failed and we were unable to recover it. 00:36:58.116 [2024-07-10 14:39:07.428009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.116 [2024-07-10 14:39:07.428207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.116 [2024-07-10 14:39:07.428240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.116 [2024-07-10 14:39:07.428263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.116 [2024-07-10 14:39:07.428281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.116 [2024-07-10 14:39:07.428322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.116 qpair failed and we were unable to recover it. 00:36:58.116 [2024-07-10 14:39:07.438055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.116 [2024-07-10 14:39:07.438229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.116 [2024-07-10 14:39:07.438264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.116 [2024-07-10 14:39:07.438287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.116 [2024-07-10 14:39:07.438306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.116 [2024-07-10 14:39:07.438360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.116 qpair failed and we were unable to recover it. 
00:36:58.116 [2024-07-10 14:39:07.448076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.116 [2024-07-10 14:39:07.448262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.116 [2024-07-10 14:39:07.448295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.116 [2024-07-10 14:39:07.448319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.116 [2024-07-10 14:39:07.448338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.116 [2024-07-10 14:39:07.448378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.116 qpair failed and we were unable to recover it. 00:36:58.116 [2024-07-10 14:39:07.458097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.116 [2024-07-10 14:39:07.458293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.116 [2024-07-10 14:39:07.458326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.116 [2024-07-10 14:39:07.458349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.116 [2024-07-10 14:39:07.458367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.116 [2024-07-10 14:39:07.458408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.116 qpair failed and we were unable to recover it. 00:36:58.116 [2024-07-10 14:39:07.468161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.116 [2024-07-10 14:39:07.468334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.116 [2024-07-10 14:39:07.468368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.116 [2024-07-10 14:39:07.468390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.116 [2024-07-10 14:39:07.468420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.116 [2024-07-10 14:39:07.468471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.116 qpair failed and we were unable to recover it. 
00:36:58.116 [2024-07-10 14:39:07.478162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.116 [2024-07-10 14:39:07.478338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.116 [2024-07-10 14:39:07.478371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.116 [2024-07-10 14:39:07.478394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.116 [2024-07-10 14:39:07.478421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.116 [2024-07-10 14:39:07.478472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.116 qpair failed and we were unable to recover it. 00:36:58.116 [2024-07-10 14:39:07.488232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.116 [2024-07-10 14:39:07.488409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.116 [2024-07-10 14:39:07.488475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.116 [2024-07-10 14:39:07.488496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.116 [2024-07-10 14:39:07.488519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.116 [2024-07-10 14:39:07.488560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.116 qpair failed and we were unable to recover it. 00:36:58.116 [2024-07-10 14:39:07.498271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.116 [2024-07-10 14:39:07.498461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.116 [2024-07-10 14:39:07.498495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.116 [2024-07-10 14:39:07.498516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.116 [2024-07-10 14:39:07.498533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.116 [2024-07-10 14:39:07.498573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.116 qpair failed and we were unable to recover it. 
00:36:58.116 [2024-07-10 14:39:07.508267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.117 [2024-07-10 14:39:07.508484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.117 [2024-07-10 14:39:07.508518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.117 [2024-07-10 14:39:07.508545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.117 [2024-07-10 14:39:07.508563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.117 [2024-07-10 14:39:07.508605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.117 qpair failed and we were unable to recover it. 00:36:58.117 [2024-07-10 14:39:07.518299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.117 [2024-07-10 14:39:07.518485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.117 [2024-07-10 14:39:07.518519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.117 [2024-07-10 14:39:07.518540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.117 [2024-07-10 14:39:07.518558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.117 [2024-07-10 14:39:07.518598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.117 qpair failed and we were unable to recover it. 00:36:58.117 [2024-07-10 14:39:07.528279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.117 [2024-07-10 14:39:07.528467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.117 [2024-07-10 14:39:07.528501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.117 [2024-07-10 14:39:07.528522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.117 [2024-07-10 14:39:07.528539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.117 [2024-07-10 14:39:07.528578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.117 qpair failed and we were unable to recover it. 
00:36:58.117 [2024-07-10 14:39:07.538395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.117 [2024-07-10 14:39:07.538611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.117 [2024-07-10 14:39:07.538644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.117 [2024-07-10 14:39:07.538666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.117 [2024-07-10 14:39:07.538683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.117 [2024-07-10 14:39:07.538722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.117 qpair failed and we were unable to recover it. 00:36:58.117 [2024-07-10 14:39:07.548407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.117 [2024-07-10 14:39:07.548611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.117 [2024-07-10 14:39:07.548646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.117 [2024-07-10 14:39:07.548668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.117 [2024-07-10 14:39:07.548686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.117 [2024-07-10 14:39:07.548733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.117 qpair failed and we were unable to recover it. 00:36:58.117 [2024-07-10 14:39:07.558492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.117 [2024-07-10 14:39:07.558687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.117 [2024-07-10 14:39:07.558719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.117 [2024-07-10 14:39:07.558741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.117 [2024-07-10 14:39:07.558758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.117 [2024-07-10 14:39:07.558798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.117 qpair failed and we were unable to recover it. 
00:36:58.117 [2024-07-10 14:39:07.568486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.117 [2024-07-10 14:39:07.568656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.117 [2024-07-10 14:39:07.568690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.117 [2024-07-10 14:39:07.568716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.117 [2024-07-10 14:39:07.568742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.117 [2024-07-10 14:39:07.568782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.117 qpair failed and we were unable to recover it. 00:36:58.117 [2024-07-10 14:39:07.578497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.117 [2024-07-10 14:39:07.578708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.117 [2024-07-10 14:39:07.578750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.117 [2024-07-10 14:39:07.578777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.117 [2024-07-10 14:39:07.578805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.117 [2024-07-10 14:39:07.578845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.117 qpair failed and we were unable to recover it. 00:36:58.117 [2024-07-10 14:39:07.588470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.117 [2024-07-10 14:39:07.588646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.117 [2024-07-10 14:39:07.588679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.117 [2024-07-10 14:39:07.588701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.117 [2024-07-10 14:39:07.588718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.117 [2024-07-10 14:39:07.588758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.117 qpair failed and we were unable to recover it. 
00:36:58.377 [2024-07-10 14:39:07.598571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.377 [2024-07-10 14:39:07.598793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.377 [2024-07-10 14:39:07.598829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.377 [2024-07-10 14:39:07.598851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.377 [2024-07-10 14:39:07.598869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.377 [2024-07-10 14:39:07.598909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.377 qpair failed and we were unable to recover it. 00:36:58.377 [2024-07-10 14:39:07.608583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.377 [2024-07-10 14:39:07.608777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.377 [2024-07-10 14:39:07.608823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.377 [2024-07-10 14:39:07.608845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.377 [2024-07-10 14:39:07.608862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.377 [2024-07-10 14:39:07.608913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.377 qpair failed and we were unable to recover it. 00:36:58.377 [2024-07-10 14:39:07.618606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.377 [2024-07-10 14:39:07.618768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.377 [2024-07-10 14:39:07.618801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.377 [2024-07-10 14:39:07.618823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.377 [2024-07-10 14:39:07.618840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.377 [2024-07-10 14:39:07.618880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.377 qpair failed and we were unable to recover it. 
00:36:58.377 [2024-07-10 14:39:07.628645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.377 [2024-07-10 14:39:07.628839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.377 [2024-07-10 14:39:07.628876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.377 [2024-07-10 14:39:07.628898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.377 [2024-07-10 14:39:07.628915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.377 [2024-07-10 14:39:07.628956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.377 qpair failed and we were unable to recover it. 00:36:58.377 [2024-07-10 14:39:07.638622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.377 [2024-07-10 14:39:07.638796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.377 [2024-07-10 14:39:07.638833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.377 [2024-07-10 14:39:07.638854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.377 [2024-07-10 14:39:07.638871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.377 [2024-07-10 14:39:07.638917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.377 qpair failed and we were unable to recover it. 00:36:58.377 [2024-07-10 14:39:07.648681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.377 [2024-07-10 14:39:07.648853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.377 [2024-07-10 14:39:07.648887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.377 [2024-07-10 14:39:07.648909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.377 [2024-07-10 14:39:07.648926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.377 [2024-07-10 14:39:07.648965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.377 qpair failed and we were unable to recover it. 
00:36:58.377 [2024-07-10 14:39:07.658880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.377 [2024-07-10 14:39:07.659059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.377 [2024-07-10 14:39:07.659092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.377 [2024-07-10 14:39:07.659113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.377 [2024-07-10 14:39:07.659131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.377 [2024-07-10 14:39:07.659170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.377 qpair failed and we were unable to recover it. 00:36:58.377 [2024-07-10 14:39:07.668734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.377 [2024-07-10 14:39:07.668962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.377 [2024-07-10 14:39:07.669005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.377 [2024-07-10 14:39:07.669028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.377 [2024-07-10 14:39:07.669045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.377 [2024-07-10 14:39:07.669085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.377 qpair failed and we were unable to recover it. 00:36:58.377 [2024-07-10 14:39:07.678813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.377 [2024-07-10 14:39:07.679028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.377 [2024-07-10 14:39:07.679075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.377 [2024-07-10 14:39:07.679098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.377 [2024-07-10 14:39:07.679116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.377 [2024-07-10 14:39:07.679161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.377 qpair failed and we were unable to recover it. 
00:36:58.377 [2024-07-10 14:39:07.688777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.377 [2024-07-10 14:39:07.688946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.377 [2024-07-10 14:39:07.688979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.377 [2024-07-10 14:39:07.689001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.377 [2024-07-10 14:39:07.689019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.377 [2024-07-10 14:39:07.689058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.377 qpair failed and we were unable to recover it. 00:36:58.377 [2024-07-10 14:39:07.698897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.377 [2024-07-10 14:39:07.699066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.377 [2024-07-10 14:39:07.699100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.377 [2024-07-10 14:39:07.699157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.377 [2024-07-10 14:39:07.699177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.377 [2024-07-10 14:39:07.699216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.377 qpair failed and we were unable to recover it. 00:36:58.377 [2024-07-10 14:39:07.708912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.377 [2024-07-10 14:39:07.709132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.377 [2024-07-10 14:39:07.709165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.377 [2024-07-10 14:39:07.709187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.377 [2024-07-10 14:39:07.709204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.709250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 
00:36:58.378 [2024-07-10 14:39:07.718863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.378 [2024-07-10 14:39:07.719095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.378 [2024-07-10 14:39:07.719128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.378 [2024-07-10 14:39:07.719150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.378 [2024-07-10 14:39:07.719168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.719208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 00:36:58.378 [2024-07-10 14:39:07.728891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.378 [2024-07-10 14:39:07.729050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.378 [2024-07-10 14:39:07.729083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.378 [2024-07-10 14:39:07.729105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.378 [2024-07-10 14:39:07.729122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.729162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 00:36:58.378 [2024-07-10 14:39:07.738944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.378 [2024-07-10 14:39:07.739103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.378 [2024-07-10 14:39:07.739135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.378 [2024-07-10 14:39:07.739157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.378 [2024-07-10 14:39:07.739174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.739213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 
00:36:58.378 [2024-07-10 14:39:07.748947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.378 [2024-07-10 14:39:07.749106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.378 [2024-07-10 14:39:07.749139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.378 [2024-07-10 14:39:07.749161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.378 [2024-07-10 14:39:07.749179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.749219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 00:36:58.378 [2024-07-10 14:39:07.759058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.378 [2024-07-10 14:39:07.759240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.378 [2024-07-10 14:39:07.759278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.378 [2024-07-10 14:39:07.759301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.378 [2024-07-10 14:39:07.759319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.759358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 00:36:58.378 [2024-07-10 14:39:07.768987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.378 [2024-07-10 14:39:07.769166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.378 [2024-07-10 14:39:07.769200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.378 [2024-07-10 14:39:07.769221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.378 [2024-07-10 14:39:07.769239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.769279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 
00:36:58.378 [2024-07-10 14:39:07.779059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.378 [2024-07-10 14:39:07.779225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.378 [2024-07-10 14:39:07.779258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.378 [2024-07-10 14:39:07.779280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.378 [2024-07-10 14:39:07.779298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.779337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 00:36:58.378 [2024-07-10 14:39:07.789085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.378 [2024-07-10 14:39:07.789251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.378 [2024-07-10 14:39:07.789284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.378 [2024-07-10 14:39:07.789306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.378 [2024-07-10 14:39:07.789324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.789363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 00:36:58.378 [2024-07-10 14:39:07.799147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.378 [2024-07-10 14:39:07.799316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.378 [2024-07-10 14:39:07.799349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.378 [2024-07-10 14:39:07.799371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.378 [2024-07-10 14:39:07.799394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.799442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 
00:36:58.378 [2024-07-10 14:39:07.809174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.378 [2024-07-10 14:39:07.809339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.378 [2024-07-10 14:39:07.809373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.378 [2024-07-10 14:39:07.809395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.378 [2024-07-10 14:39:07.809412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.809461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 00:36:58.378 [2024-07-10 14:39:07.819198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.378 [2024-07-10 14:39:07.819366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.378 [2024-07-10 14:39:07.819400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.378 [2024-07-10 14:39:07.819441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.378 [2024-07-10 14:39:07.819462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.819502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 00:36:58.378 [2024-07-10 14:39:07.829147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.378 [2024-07-10 14:39:07.829314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.378 [2024-07-10 14:39:07.829346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.378 [2024-07-10 14:39:07.829368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.378 [2024-07-10 14:39:07.829386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.829444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 
00:36:58.378 [2024-07-10 14:39:07.839235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.378 [2024-07-10 14:39:07.839402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.378 [2024-07-10 14:39:07.839453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.378 [2024-07-10 14:39:07.839475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.378 [2024-07-10 14:39:07.839493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.839533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 00:36:58.378 [2024-07-10 14:39:07.849301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.378 [2024-07-10 14:39:07.849498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.378 [2024-07-10 14:39:07.849532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.378 [2024-07-10 14:39:07.849554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.378 [2024-07-10 14:39:07.849571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.378 [2024-07-10 14:39:07.849611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.378 qpair failed and we were unable to recover it. 00:36:58.638 [2024-07-10 14:39:07.859339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.638 [2024-07-10 14:39:07.859525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.638 [2024-07-10 14:39:07.859561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.638 [2024-07-10 14:39:07.859585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.638 [2024-07-10 14:39:07.859604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.638 [2024-07-10 14:39:07.859645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.638 qpair failed and we were unable to recover it. 
00:36:58.638 [2024-07-10 14:39:07.869329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.638 [2024-07-10 14:39:07.869503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.638 [2024-07-10 14:39:07.869539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.638 [2024-07-10 14:39:07.869561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.638 [2024-07-10 14:39:07.869579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.638 [2024-07-10 14:39:07.869619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.638 qpair failed and we were unable to recover it. 00:36:58.638 [2024-07-10 14:39:07.879328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.638 [2024-07-10 14:39:07.879505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.638 [2024-07-10 14:39:07.879545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.638 [2024-07-10 14:39:07.879567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.638 [2024-07-10 14:39:07.879585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.638 [2024-07-10 14:39:07.879625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.638 qpair failed and we were unable to recover it. 00:36:58.638 [2024-07-10 14:39:07.889496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.638 [2024-07-10 14:39:07.889708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.638 [2024-07-10 14:39:07.889750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.638 [2024-07-10 14:39:07.889772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.638 [2024-07-10 14:39:07.889796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.638 [2024-07-10 14:39:07.889847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.638 qpair failed and we were unable to recover it. 
00:36:58.638 [2024-07-10 14:39:07.899477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.638 [2024-07-10 14:39:07.899644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.638 [2024-07-10 14:39:07.899677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.638 [2024-07-10 14:39:07.899699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.638 [2024-07-10 14:39:07.899717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.638 [2024-07-10 14:39:07.899766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.638 qpair failed and we were unable to recover it. 00:36:58.638 [2024-07-10 14:39:07.909587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.638 [2024-07-10 14:39:07.909764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.638 [2024-07-10 14:39:07.909798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.638 [2024-07-10 14:39:07.909820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.638 [2024-07-10 14:39:07.909837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.638 [2024-07-10 14:39:07.909886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.638 qpair failed and we were unable to recover it. 00:36:58.638 [2024-07-10 14:39:07.919484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.638 [2024-07-10 14:39:07.919658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.638 [2024-07-10 14:39:07.919692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.638 [2024-07-10 14:39:07.919714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.638 [2024-07-10 14:39:07.919737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.638 [2024-07-10 14:39:07.919777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.638 qpair failed and we were unable to recover it. 
00:36:58.638 [2024-07-10 14:39:07.929472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.638 [2024-07-10 14:39:07.929639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.638 [2024-07-10 14:39:07.929672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.638 [2024-07-10 14:39:07.929694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.638 [2024-07-10 14:39:07.929712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.638 [2024-07-10 14:39:07.929751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.638 qpair failed and we were unable to recover it. 00:36:58.638 [2024-07-10 14:39:07.939566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.638 [2024-07-10 14:39:07.939725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.638 [2024-07-10 14:39:07.939759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.638 [2024-07-10 14:39:07.939781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.638 [2024-07-10 14:39:07.939798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.638 [2024-07-10 14:39:07.939838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.638 qpair failed and we were unable to recover it. 00:36:58.638 [2024-07-10 14:39:07.949587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.638 [2024-07-10 14:39:07.949746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.638 [2024-07-10 14:39:07.949788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.638 [2024-07-10 14:39:07.949810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.638 [2024-07-10 14:39:07.949827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.639 [2024-07-10 14:39:07.949866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.639 qpair failed and we were unable to recover it. 
00:36:58.639 [2024-07-10 14:39:07.959563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.639 [2024-07-10 14:39:07.959739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.639 [2024-07-10 14:39:07.959772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.639 [2024-07-10 14:39:07.959793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.639 [2024-07-10 14:39:07.959811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.639 [2024-07-10 14:39:07.959850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.639 qpair failed and we were unable to recover it. 00:36:58.639 [2024-07-10 14:39:07.969629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.639 [2024-07-10 14:39:07.969792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.639 [2024-07-10 14:39:07.969825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.639 [2024-07-10 14:39:07.969846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.639 [2024-07-10 14:39:07.969869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.639 [2024-07-10 14:39:07.969909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.639 qpair failed and we were unable to recover it. 00:36:58.639 [2024-07-10 14:39:07.979716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.639 [2024-07-10 14:39:07.979913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.639 [2024-07-10 14:39:07.979948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.639 [2024-07-10 14:39:07.979980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.639 [2024-07-10 14:39:07.980000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.639 [2024-07-10 14:39:07.980040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.639 qpair failed and we were unable to recover it. 
00:36:58.639 [2024-07-10 14:39:07.989646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.639 [2024-07-10 14:39:07.989814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.639 [2024-07-10 14:39:07.989848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.639 [2024-07-10 14:39:07.989869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.639 [2024-07-10 14:39:07.989886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.639 [2024-07-10 14:39:07.989926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.639 qpair failed and we were unable to recover it. 00:36:58.639 [2024-07-10 14:39:07.999750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.639 [2024-07-10 14:39:07.999927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.639 [2024-07-10 14:39:07.999960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.639 [2024-07-10 14:39:07.999982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.639 [2024-07-10 14:39:08.000000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.639 [2024-07-10 14:39:08.000040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.639 qpair failed and we were unable to recover it. 00:36:58.639 [2024-07-10 14:39:08.009734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.639 [2024-07-10 14:39:08.009892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.639 [2024-07-10 14:39:08.009924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.639 [2024-07-10 14:39:08.009946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.639 [2024-07-10 14:39:08.009964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.639 [2024-07-10 14:39:08.010004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.639 qpair failed and we were unable to recover it. 
00:36:58.639 [2024-07-10 14:39:08.019786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.639 [2024-07-10 14:39:08.019951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.639 [2024-07-10 14:39:08.019984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.639 [2024-07-10 14:39:08.020005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.639 [2024-07-10 14:39:08.020023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.639 [2024-07-10 14:39:08.020063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.639 qpair failed and we were unable to recover it. 00:36:58.639 [2024-07-10 14:39:08.029803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.639 [2024-07-10 14:39:08.029967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.639 [2024-07-10 14:39:08.030000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.639 [2024-07-10 14:39:08.030023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.639 [2024-07-10 14:39:08.030040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.639 [2024-07-10 14:39:08.030080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.639 qpair failed and we were unable to recover it. 00:36:58.639 [2024-07-10 14:39:08.039918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.639 [2024-07-10 14:39:08.040093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.639 [2024-07-10 14:39:08.040125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.639 [2024-07-10 14:39:08.040147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.639 [2024-07-10 14:39:08.040166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.639 [2024-07-10 14:39:08.040205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.639 qpair failed and we were unable to recover it. 
00:36:58.639 [2024-07-10 14:39:08.050059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.639 [2024-07-10 14:39:08.050258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.639 [2024-07-10 14:39:08.050292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.639 [2024-07-10 14:39:08.050313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.639 [2024-07-10 14:39:08.050330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.639 [2024-07-10 14:39:08.050370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.639 qpair failed and we were unable to recover it. 00:36:58.639 [2024-07-10 14:39:08.059927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.639 [2024-07-10 14:39:08.060097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.639 [2024-07-10 14:39:08.060130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.639 [2024-07-10 14:39:08.060152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.639 [2024-07-10 14:39:08.060170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.639 [2024-07-10 14:39:08.060208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.639 qpair failed and we were unable to recover it. 00:36:58.639 [2024-07-10 14:39:08.069893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.639 [2024-07-10 14:39:08.070081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.639 [2024-07-10 14:39:08.070118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.639 [2024-07-10 14:39:08.070142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.639 [2024-07-10 14:39:08.070159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.640 [2024-07-10 14:39:08.070207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.640 qpair failed and we were unable to recover it. 
00:36:58.640 [2024-07-10 14:39:08.080119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.640 [2024-07-10 14:39:08.080290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.640 [2024-07-10 14:39:08.080323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.640 [2024-07-10 14:39:08.080344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.640 [2024-07-10 14:39:08.080361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.640 [2024-07-10 14:39:08.080401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.640 qpair failed and we were unable to recover it. 00:36:58.640 [2024-07-10 14:39:08.089948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.640 [2024-07-10 14:39:08.090119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.640 [2024-07-10 14:39:08.090152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.640 [2024-07-10 14:39:08.090174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.640 [2024-07-10 14:39:08.090191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.640 [2024-07-10 14:39:08.090230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.640 qpair failed and we were unable to recover it. 00:36:58.640 [2024-07-10 14:39:08.100009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.640 [2024-07-10 14:39:08.100179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.640 [2024-07-10 14:39:08.100211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.640 [2024-07-10 14:39:08.100233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.640 [2024-07-10 14:39:08.100251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.640 [2024-07-10 14:39:08.100290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.640 qpair failed and we were unable to recover it. 
00:36:58.640 [2024-07-10 14:39:08.110090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.640 [2024-07-10 14:39:08.110319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.640 [2024-07-10 14:39:08.110352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.640 [2024-07-10 14:39:08.110373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.640 [2024-07-10 14:39:08.110391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.640 [2024-07-10 14:39:08.110444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.640 qpair failed and we were unable to recover it. 00:36:58.899 [2024-07-10 14:39:08.120084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.899 [2024-07-10 14:39:08.120256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.899 [2024-07-10 14:39:08.120292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.899 [2024-07-10 14:39:08.120314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.899 [2024-07-10 14:39:08.120331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.899 [2024-07-10 14:39:08.120372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.899 qpair failed and we were unable to recover it. 00:36:58.899 [2024-07-10 14:39:08.130121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.899 [2024-07-10 14:39:08.130295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.899 [2024-07-10 14:39:08.130330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.899 [2024-07-10 14:39:08.130352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.899 [2024-07-10 14:39:08.130369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.899 [2024-07-10 14:39:08.130409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.899 qpair failed and we were unable to recover it. 
00:36:58.899 [2024-07-10 14:39:08.140172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.899 [2024-07-10 14:39:08.140332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.899 [2024-07-10 14:39:08.140365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.899 [2024-07-10 14:39:08.140387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.899 [2024-07-10 14:39:08.140404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.899 [2024-07-10 14:39:08.140454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.899 qpair failed and we were unable to recover it. 00:36:58.899 [2024-07-10 14:39:08.150159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.899 [2024-07-10 14:39:08.150324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.899 [2024-07-10 14:39:08.150357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.899 [2024-07-10 14:39:08.150378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.899 [2024-07-10 14:39:08.150396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.899 [2024-07-10 14:39:08.150441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.899 qpair failed and we were unable to recover it. 00:36:58.899 [2024-07-10 14:39:08.160181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.899 [2024-07-10 14:39:08.160366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.899 [2024-07-10 14:39:08.160405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.899 [2024-07-10 14:39:08.160435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.899 [2024-07-10 14:39:08.160455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.899 [2024-07-10 14:39:08.160502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.899 qpair failed and we were unable to recover it. 
00:36:58.899 [2024-07-10 14:39:08.170155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.899 [2024-07-10 14:39:08.170319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.899 [2024-07-10 14:39:08.170353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.899 [2024-07-10 14:39:08.170375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.899 [2024-07-10 14:39:08.170392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.899 [2024-07-10 14:39:08.170439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.899 qpair failed and we were unable to recover it. 00:36:58.899 [2024-07-10 14:39:08.180286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.899 [2024-07-10 14:39:08.180505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.899 [2024-07-10 14:39:08.180538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.899 [2024-07-10 14:39:08.180560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.899 [2024-07-10 14:39:08.180578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.899 [2024-07-10 14:39:08.180618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.899 qpair failed and we were unable to recover it. 00:36:58.899 [2024-07-10 14:39:08.190284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.899 [2024-07-10 14:39:08.190468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.899 [2024-07-10 14:39:08.190502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.899 [2024-07-10 14:39:08.190524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.899 [2024-07-10 14:39:08.190541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.899 [2024-07-10 14:39:08.190580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.899 qpair failed and we were unable to recover it. 
00:36:58.899 [2024-07-10 14:39:08.200344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.899 [2024-07-10 14:39:08.200564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.899 [2024-07-10 14:39:08.200597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.899 [2024-07-10 14:39:08.200619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.899 [2024-07-10 14:39:08.200637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.200682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 00:36:58.900 [2024-07-10 14:39:08.210377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.900 [2024-07-10 14:39:08.210547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.900 [2024-07-10 14:39:08.210581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.900 [2024-07-10 14:39:08.210602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.900 [2024-07-10 14:39:08.210633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.210675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 00:36:58.900 [2024-07-10 14:39:08.220406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.900 [2024-07-10 14:39:08.220590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.900 [2024-07-10 14:39:08.220623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.900 [2024-07-10 14:39:08.220645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.900 [2024-07-10 14:39:08.220662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.220702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 
00:36:58.900 [2024-07-10 14:39:08.230439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.900 [2024-07-10 14:39:08.230652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.900 [2024-07-10 14:39:08.230686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.900 [2024-07-10 14:39:08.230707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.900 [2024-07-10 14:39:08.230724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.230764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 00:36:58.900 [2024-07-10 14:39:08.240414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.900 [2024-07-10 14:39:08.240604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.900 [2024-07-10 14:39:08.240637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.900 [2024-07-10 14:39:08.240659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.900 [2024-07-10 14:39:08.240677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.240717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 00:36:58.900 [2024-07-10 14:39:08.250400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.900 [2024-07-10 14:39:08.250578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.900 [2024-07-10 14:39:08.250612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.900 [2024-07-10 14:39:08.250633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.900 [2024-07-10 14:39:08.250650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.250691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 
00:36:58.900 [2024-07-10 14:39:08.260495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.900 [2024-07-10 14:39:08.260671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.900 [2024-07-10 14:39:08.260705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.900 [2024-07-10 14:39:08.260727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.900 [2024-07-10 14:39:08.260744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.260783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 00:36:58.900 [2024-07-10 14:39:08.270511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.900 [2024-07-10 14:39:08.270675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.900 [2024-07-10 14:39:08.270714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.900 [2024-07-10 14:39:08.270737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.900 [2024-07-10 14:39:08.270755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.270794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 00:36:58.900 [2024-07-10 14:39:08.280500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.900 [2024-07-10 14:39:08.280673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.900 [2024-07-10 14:39:08.280705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.900 [2024-07-10 14:39:08.280727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.900 [2024-07-10 14:39:08.280744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.280784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 
00:36:58.900 [2024-07-10 14:39:08.290567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.900 [2024-07-10 14:39:08.290741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.900 [2024-07-10 14:39:08.290778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.900 [2024-07-10 14:39:08.290799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.900 [2024-07-10 14:39:08.290822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.290862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 00:36:58.900 [2024-07-10 14:39:08.300651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.900 [2024-07-10 14:39:08.300822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.900 [2024-07-10 14:39:08.300855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.900 [2024-07-10 14:39:08.300876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.900 [2024-07-10 14:39:08.300894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.300933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 00:36:58.900 [2024-07-10 14:39:08.310582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.900 [2024-07-10 14:39:08.310749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.900 [2024-07-10 14:39:08.310782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.900 [2024-07-10 14:39:08.310804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.900 [2024-07-10 14:39:08.310822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.310860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 
00:36:58.900 [2024-07-10 14:39:08.320661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.900 [2024-07-10 14:39:08.320903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.900 [2024-07-10 14:39:08.320939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.900 [2024-07-10 14:39:08.320964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.900 [2024-07-10 14:39:08.320982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.321022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 00:36:58.900 [2024-07-10 14:39:08.330733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.900 [2024-07-10 14:39:08.330945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.900 [2024-07-10 14:39:08.330978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.900 [2024-07-10 14:39:08.331000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.900 [2024-07-10 14:39:08.331018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.331057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 00:36:58.900 [2024-07-10 14:39:08.340730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.900 [2024-07-10 14:39:08.340894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.900 [2024-07-10 14:39:08.340927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.900 [2024-07-10 14:39:08.340949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.900 [2024-07-10 14:39:08.340967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.900 [2024-07-10 14:39:08.341006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.900 qpair failed and we were unable to recover it. 
00:36:58.900 [2024-07-10 14:39:08.350721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.901 [2024-07-10 14:39:08.350887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.901 [2024-07-10 14:39:08.350921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.901 [2024-07-10 14:39:08.350943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.901 [2024-07-10 14:39:08.350960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.901 [2024-07-10 14:39:08.351000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.901 qpair failed and we were unable to recover it. 00:36:58.901 [2024-07-10 14:39:08.360723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.901 [2024-07-10 14:39:08.360890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.901 [2024-07-10 14:39:08.360923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.901 [2024-07-10 14:39:08.360944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.901 [2024-07-10 14:39:08.360962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.901 [2024-07-10 14:39:08.361001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.901 qpair failed and we were unable to recover it. 00:36:58.901 [2024-07-10 14:39:08.370779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.901 [2024-07-10 14:39:08.370950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.901 [2024-07-10 14:39:08.370983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.901 [2024-07-10 14:39:08.371007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.901 [2024-07-10 14:39:08.371024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:58.901 [2024-07-10 14:39:08.371064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.901 qpair failed and we were unable to recover it. 
00:36:59.160 [2024-07-10 14:39:08.380795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.160 [2024-07-10 14:39:08.380955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.160 [2024-07-10 14:39:08.380990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.160 [2024-07-10 14:39:08.381018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.160 [2024-07-10 14:39:08.381037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.160 [2024-07-10 14:39:08.381078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.160 qpair failed and we were unable to recover it. 00:36:59.160 [2024-07-10 14:39:08.390797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.160 [2024-07-10 14:39:08.390964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.160 [2024-07-10 14:39:08.390998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.160 [2024-07-10 14:39:08.391021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.160 [2024-07-10 14:39:08.391039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.160 [2024-07-10 14:39:08.391080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.160 qpair failed and we were unable to recover it. 00:36:59.160 [2024-07-10 14:39:08.400886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.160 [2024-07-10 14:39:08.401064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.160 [2024-07-10 14:39:08.401097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.401123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.401141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.161 [2024-07-10 14:39:08.401182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.161 qpair failed and we were unable to recover it. 
00:36:59.161 [2024-07-10 14:39:08.410860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.161 [2024-07-10 14:39:08.411031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.161 [2024-07-10 14:39:08.411064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.411085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.411102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.161 [2024-07-10 14:39:08.411153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-10 14:39:08.420938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.161 [2024-07-10 14:39:08.421107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.161 [2024-07-10 14:39:08.421140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.421162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.421180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.161 [2024-07-10 14:39:08.421220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-10 14:39:08.430935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.161 [2024-07-10 14:39:08.431104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.161 [2024-07-10 14:39:08.431138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.431160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.431177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.161 [2024-07-10 14:39:08.431217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.161 qpair failed and we were unable to recover it. 
00:36:59.161 [2024-07-10 14:39:08.440954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.161 [2024-07-10 14:39:08.441133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.161 [2024-07-10 14:39:08.441167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.441189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.441206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.161 [2024-07-10 14:39:08.441246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-10 14:39:08.450991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.161 [2024-07-10 14:39:08.451169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.161 [2024-07-10 14:39:08.451201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.451223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.451241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.161 [2024-07-10 14:39:08.451281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-10 14:39:08.461126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.161 [2024-07-10 14:39:08.461284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.161 [2024-07-10 14:39:08.461317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.461339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.461357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.161 [2024-07-10 14:39:08.461402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.161 qpair failed and we were unable to recover it. 
00:36:59.161 [2024-07-10 14:39:08.471032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.161 [2024-07-10 14:39:08.471191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.161 [2024-07-10 14:39:08.471242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.471265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.471283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.161 [2024-07-10 14:39:08.471323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-10 14:39:08.481110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.161 [2024-07-10 14:39:08.481286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.161 [2024-07-10 14:39:08.481319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.481341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.481358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.161 [2024-07-10 14:39:08.481397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-10 14:39:08.491140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.161 [2024-07-10 14:39:08.491303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.161 [2024-07-10 14:39:08.491336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.491359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.491376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.161 [2024-07-10 14:39:08.491415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.161 qpair failed and we were unable to recover it. 
00:36:59.161 [2024-07-10 14:39:08.501150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.161 [2024-07-10 14:39:08.501312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.161 [2024-07-10 14:39:08.501345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.501367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.501384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.161 [2024-07-10 14:39:08.501430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-10 14:39:08.511183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.161 [2024-07-10 14:39:08.511349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.161 [2024-07-10 14:39:08.511382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.511403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.511421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.161 [2024-07-10 14:39:08.511470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-10 14:39:08.521178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.161 [2024-07-10 14:39:08.521352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.161 [2024-07-10 14:39:08.521385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.521407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.521431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.161 [2024-07-10 14:39:08.521474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.161 qpair failed and we were unable to recover it. 
00:36:59.161 [2024-07-10 14:39:08.531340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.161 [2024-07-10 14:39:08.531536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.161 [2024-07-10 14:39:08.531570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.531592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.531609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.161 [2024-07-10 14:39:08.531649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-10 14:39:08.541322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.161 [2024-07-10 14:39:08.541492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.161 [2024-07-10 14:39:08.541526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.161 [2024-07-10 14:39:08.541548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.161 [2024-07-10 14:39:08.541566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.162 [2024-07-10 14:39:08.541605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-10 14:39:08.551286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.162 [2024-07-10 14:39:08.551449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.162 [2024-07-10 14:39:08.551482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.162 [2024-07-10 14:39:08.551504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.162 [2024-07-10 14:39:08.551521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.162 [2024-07-10 14:39:08.551560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.162 qpair failed and we were unable to recover it. 
00:36:59.162 [2024-07-10 14:39:08.561376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.162 [2024-07-10 14:39:08.561566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.162 [2024-07-10 14:39:08.561604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.162 [2024-07-10 14:39:08.561627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.162 [2024-07-10 14:39:08.561644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.162 [2024-07-10 14:39:08.561684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-10 14:39:08.571385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.162 [2024-07-10 14:39:08.571557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.162 [2024-07-10 14:39:08.571590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.162 [2024-07-10 14:39:08.571612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.162 [2024-07-10 14:39:08.571630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.162 [2024-07-10 14:39:08.571670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-10 14:39:08.581608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.162 [2024-07-10 14:39:08.581782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.162 [2024-07-10 14:39:08.581816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.162 [2024-07-10 14:39:08.581842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.162 [2024-07-10 14:39:08.581860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.162 [2024-07-10 14:39:08.581899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.162 qpair failed and we were unable to recover it. 
00:36:59.162 [2024-07-10 14:39:08.591468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.162 [2024-07-10 14:39:08.591627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.162 [2024-07-10 14:39:08.591661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.162 [2024-07-10 14:39:08.591683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.162 [2024-07-10 14:39:08.591701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.162 [2024-07-10 14:39:08.591740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-10 14:39:08.601457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.162 [2024-07-10 14:39:08.601627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.162 [2024-07-10 14:39:08.601660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.162 [2024-07-10 14:39:08.601683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.162 [2024-07-10 14:39:08.601700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.162 [2024-07-10 14:39:08.601746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-10 14:39:08.611503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.162 [2024-07-10 14:39:08.611667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.162 [2024-07-10 14:39:08.611700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.162 [2024-07-10 14:39:08.611722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.162 [2024-07-10 14:39:08.611739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.162 [2024-07-10 14:39:08.611780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.162 qpair failed and we were unable to recover it. 
00:36:59.162 [2024-07-10 14:39:08.621553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.162 [2024-07-10 14:39:08.621761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.162 [2024-07-10 14:39:08.621794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.162 [2024-07-10 14:39:08.621816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.162 [2024-07-10 14:39:08.621834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.162 [2024-07-10 14:39:08.621874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-10 14:39:08.631566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.162 [2024-07-10 14:39:08.631732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.162 [2024-07-10 14:39:08.631766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.162 [2024-07-10 14:39:08.631789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.162 [2024-07-10 14:39:08.631807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.162 [2024-07-10 14:39:08.631847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.422 [2024-07-10 14:39:08.641619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.422 [2024-07-10 14:39:08.641861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.422 [2024-07-10 14:39:08.641897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.422 [2024-07-10 14:39:08.641925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.422 [2024-07-10 14:39:08.641944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.422 [2024-07-10 14:39:08.641985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.422 qpair failed and we were unable to recover it. 
00:36:59.422 [2024-07-10 14:39:08.651546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.422 [2024-07-10 14:39:08.651711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.422 [2024-07-10 14:39:08.651751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.422 [2024-07-10 14:39:08.651775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.422 [2024-07-10 14:39:08.651793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.422 [2024-07-10 14:39:08.651834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.422 qpair failed and we were unable to recover it. 00:36:59.422 [2024-07-10 14:39:08.661681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.422 [2024-07-10 14:39:08.661863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.422 [2024-07-10 14:39:08.661903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.422 [2024-07-10 14:39:08.661925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.422 [2024-07-10 14:39:08.661943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.422 [2024-07-10 14:39:08.661982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.422 qpair failed and we were unable to recover it. 00:36:59.422 [2024-07-10 14:39:08.671644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.422 [2024-07-10 14:39:08.671802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.422 [2024-07-10 14:39:08.671835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.422 [2024-07-10 14:39:08.671858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.422 [2024-07-10 14:39:08.671879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.422 [2024-07-10 14:39:08.671918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.422 qpair failed and we were unable to recover it. 
00:36:59.422 [2024-07-10 14:39:08.681654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.422 [2024-07-10 14:39:08.681833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.422 [2024-07-10 14:39:08.681866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.422 [2024-07-10 14:39:08.681888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.422 [2024-07-10 14:39:08.681905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.422 [2024-07-10 14:39:08.681944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.422 qpair failed and we were unable to recover it. 00:36:59.422 [2024-07-10 14:39:08.691705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.422 [2024-07-10 14:39:08.691889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.422 [2024-07-10 14:39:08.691923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.422 [2024-07-10 14:39:08.691945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.422 [2024-07-10 14:39:08.691968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.422 [2024-07-10 14:39:08.692009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.422 qpair failed and we were unable to recover it. 00:36:59.422 [2024-07-10 14:39:08.701755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.422 [2024-07-10 14:39:08.701933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.422 [2024-07-10 14:39:08.701966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.422 [2024-07-10 14:39:08.701988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.422 [2024-07-10 14:39:08.702006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.422 [2024-07-10 14:39:08.702045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.422 qpair failed and we were unable to recover it. 
00:36:59.422 [2024-07-10 14:39:08.711780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.422 [2024-07-10 14:39:08.711974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.422 [2024-07-10 14:39:08.712009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.422 [2024-07-10 14:39:08.712036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.422 [2024-07-10 14:39:08.712055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.422 [2024-07-10 14:39:08.712095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.422 qpair failed and we were unable to recover it. 00:36:59.422 [2024-07-10 14:39:08.721796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.422 [2024-07-10 14:39:08.721969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.422 [2024-07-10 14:39:08.722002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.422 [2024-07-10 14:39:08.722024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.422 [2024-07-10 14:39:08.722042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.422 [2024-07-10 14:39:08.722095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.422 qpair failed and we were unable to recover it. 00:36:59.422 [2024-07-10 14:39:08.731843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.422 [2024-07-10 14:39:08.732017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.422 [2024-07-10 14:39:08.732051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.422 [2024-07-10 14:39:08.732073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.422 [2024-07-10 14:39:08.732090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.422 [2024-07-10 14:39:08.732129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.422 qpair failed and we were unable to recover it. 
00:36:59.422 [2024-07-10 14:39:08.741863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.422 [2024-07-10 14:39:08.742036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.422 [2024-07-10 14:39:08.742069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.422 [2024-07-10 14:39:08.742090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.422 [2024-07-10 14:39:08.742107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.422 [2024-07-10 14:39:08.742147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.422 qpair failed and we were unable to recover it. 00:36:59.422 [2024-07-10 14:39:08.751861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.422 [2024-07-10 14:39:08.752024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.422 [2024-07-10 14:39:08.752057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.422 [2024-07-10 14:39:08.752079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.422 [2024-07-10 14:39:08.752097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.422 [2024-07-10 14:39:08.752137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.422 qpair failed and we were unable to recover it. 00:36:59.422 [2024-07-10 14:39:08.761892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.423 [2024-07-10 14:39:08.762064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.423 [2024-07-10 14:39:08.762097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.423 [2024-07-10 14:39:08.762119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.423 [2024-07-10 14:39:08.762136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.423 [2024-07-10 14:39:08.762176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.423 qpair failed and we were unable to recover it. 
00:36:59.423 [2024-07-10 14:39:08.772018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.423 [2024-07-10 14:39:08.772218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.423 [2024-07-10 14:39:08.772252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.423 [2024-07-10 14:39:08.772274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.423 [2024-07-10 14:39:08.772292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.423 [2024-07-10 14:39:08.772332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.423 qpair failed and we were unable to recover it. 00:36:59.423 [2024-07-10 14:39:08.782043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.423 [2024-07-10 14:39:08.782222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.423 [2024-07-10 14:39:08.782259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.423 [2024-07-10 14:39:08.782289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.423 [2024-07-10 14:39:08.782307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.423 [2024-07-10 14:39:08.782347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.423 qpair failed and we were unable to recover it. 00:36:59.423 [2024-07-10 14:39:08.792009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.423 [2024-07-10 14:39:08.792173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.423 [2024-07-10 14:39:08.792207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.423 [2024-07-10 14:39:08.792228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.423 [2024-07-10 14:39:08.792246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.423 [2024-07-10 14:39:08.792285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.423 qpair failed and we were unable to recover it. 
00:36:59.423 [2024-07-10 14:39:08.802021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.423 [2024-07-10 14:39:08.802194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.423 [2024-07-10 14:39:08.802228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.423 [2024-07-10 14:39:08.802250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.423 [2024-07-10 14:39:08.802267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.423 [2024-07-10 14:39:08.802306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.423 qpair failed and we were unable to recover it. 00:36:59.423 [2024-07-10 14:39:08.812034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.423 [2024-07-10 14:39:08.812198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.423 [2024-07-10 14:39:08.812231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.423 [2024-07-10 14:39:08.812253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.423 [2024-07-10 14:39:08.812270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.423 [2024-07-10 14:39:08.812310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.423 qpair failed and we were unable to recover it. 00:36:59.423 [2024-07-10 14:39:08.822137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.423 [2024-07-10 14:39:08.822337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.423 [2024-07-10 14:39:08.822371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.423 [2024-07-10 14:39:08.822393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.423 [2024-07-10 14:39:08.822411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.423 [2024-07-10 14:39:08.822469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.423 qpair failed and we were unable to recover it. 
00:36:59.423 [2024-07-10 14:39:08.832103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.423 [2024-07-10 14:39:08.832266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.423 [2024-07-10 14:39:08.832299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.423 [2024-07-10 14:39:08.832320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.423 [2024-07-10 14:39:08.832338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.423 [2024-07-10 14:39:08.832378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.423 qpair failed and we were unable to recover it. 00:36:59.423 [2024-07-10 14:39:08.842192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.423 [2024-07-10 14:39:08.842362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.423 [2024-07-10 14:39:08.842396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.423 [2024-07-10 14:39:08.842418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.423 [2024-07-10 14:39:08.842445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.423 [2024-07-10 14:39:08.842486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.423 qpair failed and we were unable to recover it. 00:36:59.423 [2024-07-10 14:39:08.852168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.423 [2024-07-10 14:39:08.852388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.423 [2024-07-10 14:39:08.852421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.423 [2024-07-10 14:39:08.852452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.423 [2024-07-10 14:39:08.852471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.423 [2024-07-10 14:39:08.852517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.423 qpair failed and we were unable to recover it. 
00:36:59.423 [2024-07-10 14:39:08.862203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.423 [2024-07-10 14:39:08.862379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.423 [2024-07-10 14:39:08.862413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.423 [2024-07-10 14:39:08.862443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.423 [2024-07-10 14:39:08.862463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.423 [2024-07-10 14:39:08.862503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.424 qpair failed and we were unable to recover it. 00:36:59.424 [2024-07-10 14:39:08.872205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.424 [2024-07-10 14:39:08.872370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.424 [2024-07-10 14:39:08.872403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.424 [2024-07-10 14:39:08.872436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.424 [2024-07-10 14:39:08.872457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.424 [2024-07-10 14:39:08.872497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.424 qpair failed and we were unable to recover it. 00:36:59.424 [2024-07-10 14:39:08.882252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.424 [2024-07-10 14:39:08.882433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.424 [2024-07-10 14:39:08.882476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.424 [2024-07-10 14:39:08.882497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.424 [2024-07-10 14:39:08.882515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.424 [2024-07-10 14:39:08.882555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.424 qpair failed and we were unable to recover it. 
00:36:59.424 [2024-07-10 14:39:08.892234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.424 [2024-07-10 14:39:08.892402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.424 [2024-07-10 14:39:08.892442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.424 [2024-07-10 14:39:08.892466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.424 [2024-07-10 14:39:08.892484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.424 [2024-07-10 14:39:08.892523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.424 qpair failed and we were unable to recover it. 00:36:59.684 [2024-07-10 14:39:08.902321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.684 [2024-07-10 14:39:08.902494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.684 [2024-07-10 14:39:08.902531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.684 [2024-07-10 14:39:08.902554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.684 [2024-07-10 14:39:08.902572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.684 [2024-07-10 14:39:08.902612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.684 qpair failed and we were unable to recover it. 00:36:59.684 [2024-07-10 14:39:08.912376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.684 [2024-07-10 14:39:08.912575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.684 [2024-07-10 14:39:08.912610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.684 [2024-07-10 14:39:08.912632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.684 [2024-07-10 14:39:08.912650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.684 [2024-07-10 14:39:08.912691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.684 qpair failed and we were unable to recover it. 
00:36:59.684 [2024-07-10 14:39:08.922401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.684 [2024-07-10 14:39:08.922603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.684 [2024-07-10 14:39:08.922638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.684 [2024-07-10 14:39:08.922660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.684 [2024-07-10 14:39:08.922677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.684 [2024-07-10 14:39:08.922717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.684 qpair failed and we were unable to recover it. 00:36:59.684 [2024-07-10 14:39:08.932432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.684 [2024-07-10 14:39:08.932616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.684 [2024-07-10 14:39:08.932649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.684 [2024-07-10 14:39:08.932672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.684 [2024-07-10 14:39:08.932689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.684 [2024-07-10 14:39:08.932729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.684 qpair failed and we were unable to recover it. 00:36:59.684 [2024-07-10 14:39:08.942422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.684 [2024-07-10 14:39:08.942614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.684 [2024-07-10 14:39:08.942648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.684 [2024-07-10 14:39:08.942670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.684 [2024-07-10 14:39:08.942687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.684 [2024-07-10 14:39:08.942734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.684 qpair failed and we were unable to recover it. 
00:36:59.684 [2024-07-10 14:39:08.952527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.684 [2024-07-10 14:39:08.952688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:08.952723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.685 [2024-07-10 14:39:08.952745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.685 [2024-07-10 14:39:08.952762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.685 [2024-07-10 14:39:08.952802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.685 qpair failed and we were unable to recover it. 00:36:59.685 [2024-07-10 14:39:08.962520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.685 [2024-07-10 14:39:08.962698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:08.962736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.685 [2024-07-10 14:39:08.962760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.685 [2024-07-10 14:39:08.962779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.685 [2024-07-10 14:39:08.962819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.685 qpair failed and we were unable to recover it. 00:36:59.685 [2024-07-10 14:39:08.972554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.685 [2024-07-10 14:39:08.972723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:08.972756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.685 [2024-07-10 14:39:08.972779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.685 [2024-07-10 14:39:08.972797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.685 [2024-07-10 14:39:08.972837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.685 qpair failed and we were unable to recover it. 
00:36:59.685 [2024-07-10 14:39:08.982557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.685 [2024-07-10 14:39:08.982719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:08.982753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.685 [2024-07-10 14:39:08.982789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.685 [2024-07-10 14:39:08.982807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.685 [2024-07-10 14:39:08.982847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.685 qpair failed and we were unable to recover it. 00:36:59.685 [2024-07-10 14:39:08.992550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.685 [2024-07-10 14:39:08.992709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:08.992742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.685 [2024-07-10 14:39:08.992764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.685 [2024-07-10 14:39:08.992782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.685 [2024-07-10 14:39:08.992822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.685 qpair failed and we were unable to recover it. 00:36:59.685 [2024-07-10 14:39:09.002573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.685 [2024-07-10 14:39:09.002748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:09.002782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.685 [2024-07-10 14:39:09.002805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.685 [2024-07-10 14:39:09.002823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.685 [2024-07-10 14:39:09.002869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.685 qpair failed and we were unable to recover it. 
00:36:59.685 [2024-07-10 14:39:09.012629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.685 [2024-07-10 14:39:09.012797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:09.012831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.685 [2024-07-10 14:39:09.012857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.685 [2024-07-10 14:39:09.012875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.685 [2024-07-10 14:39:09.012914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.685 qpair failed and we were unable to recover it. 00:36:59.685 [2024-07-10 14:39:09.022663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.685 [2024-07-10 14:39:09.022843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:09.022876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.685 [2024-07-10 14:39:09.022898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.685 [2024-07-10 14:39:09.022915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.685 [2024-07-10 14:39:09.022955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.685 qpair failed and we were unable to recover it. 00:36:59.685 [2024-07-10 14:39:09.032637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.685 [2024-07-10 14:39:09.032813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:09.032846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.685 [2024-07-10 14:39:09.032868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.685 [2024-07-10 14:39:09.032885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.685 [2024-07-10 14:39:09.032925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.685 qpair failed and we were unable to recover it. 
00:36:59.685 [2024-07-10 14:39:09.042724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.685 [2024-07-10 14:39:09.042895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:09.042928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.685 [2024-07-10 14:39:09.042950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.685 [2024-07-10 14:39:09.042967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.685 [2024-07-10 14:39:09.043006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.685 qpair failed and we were unable to recover it. 00:36:59.685 [2024-07-10 14:39:09.052710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.685 [2024-07-10 14:39:09.052870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:09.052932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.685 [2024-07-10 14:39:09.052955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.685 [2024-07-10 14:39:09.052973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.685 [2024-07-10 14:39:09.053012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.685 qpair failed and we were unable to recover it. 00:36:59.685 [2024-07-10 14:39:09.062798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.685 [2024-07-10 14:39:09.062968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:09.063002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.685 [2024-07-10 14:39:09.063024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.685 [2024-07-10 14:39:09.063042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.685 [2024-07-10 14:39:09.063087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.685 qpair failed and we were unable to recover it. 
00:36:59.685 [2024-07-10 14:39:09.072793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.685 [2024-07-10 14:39:09.072986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:09.073018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.685 [2024-07-10 14:39:09.073039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.685 [2024-07-10 14:39:09.073057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.685 [2024-07-10 14:39:09.073096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.685 qpair failed and we were unable to recover it. 00:36:59.685 [2024-07-10 14:39:09.082843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.685 [2024-07-10 14:39:09.083044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:09.083080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.685 [2024-07-10 14:39:09.083103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.685 [2024-07-10 14:39:09.083120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.685 [2024-07-10 14:39:09.083161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.685 qpair failed and we were unable to recover it. 00:36:59.685 [2024-07-10 14:39:09.092881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.685 [2024-07-10 14:39:09.093051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.685 [2024-07-10 14:39:09.093084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.686 [2024-07-10 14:39:09.093106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.686 [2024-07-10 14:39:09.093129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.686 [2024-07-10 14:39:09.093170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.686 qpair failed and we were unable to recover it. 
00:36:59.686 [2024-07-10 14:39:09.102870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.686 [2024-07-10 14:39:09.103033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.686 [2024-07-10 14:39:09.103066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.686 [2024-07-10 14:39:09.103088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.686 [2024-07-10 14:39:09.103106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.686 [2024-07-10 14:39:09.103145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.686 qpair failed and we were unable to recover it. 00:36:59.686 [2024-07-10 14:39:09.112869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.686 [2024-07-10 14:39:09.113036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.686 [2024-07-10 14:39:09.113070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.686 [2024-07-10 14:39:09.113091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.686 [2024-07-10 14:39:09.113108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.686 [2024-07-10 14:39:09.113147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.686 qpair failed and we were unable to recover it. 00:36:59.686 [2024-07-10 14:39:09.122961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.686 [2024-07-10 14:39:09.123150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.686 [2024-07-10 14:39:09.123183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.686 [2024-07-10 14:39:09.123214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.686 [2024-07-10 14:39:09.123232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.686 [2024-07-10 14:39:09.123271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.686 qpair failed and we were unable to recover it. 
00:36:59.686 [2024-07-10 14:39:09.132924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.686 [2024-07-10 14:39:09.133085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.686 [2024-07-10 14:39:09.133119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.686 [2024-07-10 14:39:09.133143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.686 [2024-07-10 14:39:09.133161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2a00 00:36:59.686 [2024-07-10 14:39:09.133209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:59.686 qpair failed and we were unable to recover it. 00:36:59.686 [2024-07-10 14:39:09.143004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.686 [2024-07-10 14:39:09.143180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.686 [2024-07-10 14:39:09.143222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.686 [2024-07-10 14:39:09.143248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.686 [2024-07-10 14:39:09.143267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000210000 00:36:59.686 [2024-07-10 14:39:09.143318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.686 qpair failed and we were unable to recover it. 00:36:59.686 [2024-07-10 14:39:09.153091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.686 [2024-07-10 14:39:09.153325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.686 [2024-07-10 14:39:09.153362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.686 [2024-07-10 14:39:09.153392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.686 [2024-07-10 14:39:09.153417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000210000 00:36:59.686 [2024-07-10 14:39:09.153478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.686 qpair failed and we were unable to recover it. 00:36:59.686 [2024-07-10 14:39:09.153864] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:36:59.686 A controller has encountered a failure and is being reset. 00:36:59.686 [2024-07-10 14:39:09.153942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.944 Controller properly reset. 
00:36:59.944 Initializing NVMe Controllers 00:36:59.944 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:59.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:59.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:59.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:59.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:59.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:59.944 Initialization complete. Launching workers. 00:36:59.944 Starting thread on core 1 00:36:59.944 Starting thread on core 2 00:36:59.944 Starting thread on core 3 00:36:59.944 Starting thread on core 0 00:36:59.944 14:39:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:59.944 00:36:59.944 real 0m11.812s 00:36:59.944 user 0m20.294s 00:36:59.944 sys 0m5.327s 00:36:59.944 14:39:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:59.944 14:39:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:59.944 ************************************ 00:36:59.944 END TEST nvmf_target_disconnect_tc2 00:36:59.944 ************************************ 00:36:59.944 14:39:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:36:59.944 14:39:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:59.945 rmmod nvme_tcp 00:36:59.945 rmmod nvme_fabrics 00:36:59.945 rmmod nvme_keyring 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1555410 ']' 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1555410 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1555410 ']' 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1555410 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 1555410 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1555410' 00:36:59.945 killing process with pid 1555410 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1555410 00:36:59.945 14:39:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1555410 00:37:01.318 14:39:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:01.318 14:39:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:01.318 14:39:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:01.318 14:39:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:01.318 14:39:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:01.318 14:39:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.318 14:39:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:01.318 14:39:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.218 14:39:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:03.218 00:37:03.218 real 0m17.713s 00:37:03.218 user 0m48.678s 00:37:03.218 sys 0m7.627s 00:37:03.218 14:39:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:03.218 14:39:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:03.218 ************************************ 00:37:03.218 END TEST nvmf_target_disconnect 00:37:03.218 ************************************ 00:37:03.476 14:39:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:03.476 14:39:12 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:37:03.476 14:39:12 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:03.476 14:39:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:03.476 14:39:12 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:37:03.476 00:37:03.476 real 28m57.107s 00:37:03.477 user 77m46.604s 00:37:03.477 sys 6m5.333s 00:37:03.477 14:39:12 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:03.477 14:39:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:03.477 ************************************ 00:37:03.477 END TEST nvmf_tcp 00:37:03.477 ************************************ 00:37:03.477 14:39:12 -- common/autotest_common.sh@1142 -- # return 0 00:37:03.477 14:39:12 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:37:03.477 14:39:12 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:03.477 14:39:12 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:03.477 14:39:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:03.477 14:39:12 -- common/autotest_common.sh@10 -- # set +x 00:37:03.477 ************************************ 00:37:03.477 START TEST spdkcli_nvmf_tcp 00:37:03.477 ************************************ 00:37:03.477 14:39:12 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:03.477 * Looking for test storage... 00:37:03.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1557242 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1557242 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1557242 ']' 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:03.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:03.477 14:39:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:03.477 [2024-07-10 14:39:12.953542] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:37:03.477 [2024-07-10 14:39:12.953679] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557242 ] 00:37:03.735 EAL: No free 2048 kB hugepages reported on node 1 00:37:03.735 [2024-07-10 14:39:13.076706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:03.993 [2024-07-10 14:39:13.329987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:03.993 [2024-07-10 14:39:13.329994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:04.558 14:39:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:04.558 14:39:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:37:04.558 14:39:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:04.558 14:39:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:04.558 14:39:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:04.558 14:39:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:04.558 14:39:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:04.558 14:39:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:04.558 14:39:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:04.558 14:39:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:04.558 14:39:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:04.558 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:04.558 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:04.558 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:04.558 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:04.558 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:04.558 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:04.558 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:04.558 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:04.558 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:04.558 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:04.558 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:04.558 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:04.559 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:04.559 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:04.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:04.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:04.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:04.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:04.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:04.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:04.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:04.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:04.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:04.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:04.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:04.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:04.559 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:04.559 ' 00:37:07.839 [2024-07-10 14:39:16.580682] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:08.403 [2024-07-10 14:39:17.817920] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:10.929 [2024-07-10 14:39:20.105373] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:12.828 [2024-07-10 14:39:22.071691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:14.200 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:14.200 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:14.200 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:14.200 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:14.200 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:14.200 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:14.200 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:14.200 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:14.200 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:14.200 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:14.200 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:14.200 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:14.200 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:14.457 14:39:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:14.457 14:39:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:14.457 14:39:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:14.457 14:39:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:14.457 14:39:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:14.457 14:39:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:14.457 14:39:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:14.458 14:39:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:14.715 14:39:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:14.715 14:39:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:14.715 14:39:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:14.715 14:39:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:14.715 14:39:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:14.715 14:39:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:14.716 14:39:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:14.716 14:39:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:14.716 14:39:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:14.716 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:14.716 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:14.716 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:14.716 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:14.716 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:14.716 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:14.716 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:14.716 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:14.716 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:14.716 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:14.716 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:14.716 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:14.716 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:14.716 ' 00:37:21.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:21.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:21.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:21.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:21.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:21.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:21.272 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:21.272 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:21.272 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:21.272 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:21.272 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:37:21.272 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:21.272 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:21.272 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:21.272 14:39:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:21.272 14:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:21.272 14:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:21.272 14:39:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1557242 00:37:21.272 14:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1557242 ']' 00:37:21.272 14:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1557242 00:37:21.272 14:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:37:21.272 14:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:21.272 14:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1557242 00:37:21.272 14:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:21.272 14:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:21.272 14:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1557242' 00:37:21.272 killing process with pid 1557242 00:37:21.272 14:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1557242 00:37:21.272 14:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1557242 00:37:21.839 14:39:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:21.839 14:39:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:21.839 14:39:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1557242 ']' 00:37:21.839 14:39:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1557242 00:37:21.839 14:39:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1557242 ']' 00:37:21.839 14:39:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1557242 00:37:21.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1557242) - No such process 00:37:21.839 14:39:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1557242 is not found' 00:37:21.839 Process with pid 1557242 is not found 00:37:21.839 14:39:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:21.839 14:39:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:21.839 14:39:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:21.839 00:37:21.839 real 0m18.318s 00:37:21.839 user 0m37.732s 00:37:21.839 sys 0m1.036s 00:37:21.839 14:39:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:21.839 14:39:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:21.839 ************************************ 00:37:21.839 END TEST spdkcli_nvmf_tcp 00:37:21.839 ************************************ 00:37:21.839 14:39:31 -- common/autotest_common.sh@1142 -- # return 0 00:37:21.839 14:39:31 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:21.839 14:39:31 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:21.839 14:39:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:21.839 14:39:31 -- common/autotest_common.sh@10 -- # set +x 00:37:21.839 ************************************ 00:37:21.839 START TEST nvmf_identify_passthru 00:37:21.839 ************************************ 00:37:21.839 14:39:31 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:21.839 * Looking for test storage... 00:37:21.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:21.839 14:39:31 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:21.839 14:39:31 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:21.839 14:39:31 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:21.839 14:39:31 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:21.839 14:39:31 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.839 14:39:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.839 14:39:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.839 14:39:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:21.839 14:39:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:21.839 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:21.839 14:39:31 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:21.839 14:39:31 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:21.839 14:39:31 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:21.839 14:39:31 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:21.839 14:39:31 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.840 14:39:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.840 14:39:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.840 14:39:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:21.840 14:39:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.840 14:39:31 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:21.840 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:21.840 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:21.840 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:21.840 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:21.840 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:21.840 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:21.840 14:39:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:21.840 14:39:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:21.840 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:21.840 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:21.840 14:39:31 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:37:21.840 14:39:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:23.762 14:39:33 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:23.762 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:23.763 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:23.763 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:23.763 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:23.763 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
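The trace above is nvmf/common.sh walking the PCI bus: it matches the two 0x8086:0x159b functions (Intel E810, bound to the "ice" driver) against its e810 allow-list and resolves each function to its net device (cvl_0_0, cvl_0_1) through sysfs. A rough manual equivalent of that discovery step, sketched with the vendor/device ID and BDF taken from this run (output will differ on other hosts):

  # list E810 functions by vendor:device ID, the same match the e810 array in nvmf/common.sh performs
  lspci -d 8086:159b
  # resolve the net device name behind one of the functions found above (cvl_0_0 in this run)
  ls /sys/bus/pci/devices/0000:0a:00.0/net/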
00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:23.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:23.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:37:23.763 00:37:23.763 --- 10.0.0.2 ping statistics --- 00:37:23.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.763 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:23.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:23.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:37:23.763 00:37:23.763 --- 10.0.0.1 ping statistics --- 00:37:23.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.763 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:23.763 14:39:33 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:23.763 14:39:33 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:23.763 14:39:33 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:23.763 14:39:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:23.763 14:39:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:23.763 14:39:33 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:37:23.763 14:39:33 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:37:23.763 14:39:33 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:37:23.763 14:39:33 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:37:23.763 14:39:33 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:37:23.763 14:39:33 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:37:23.763 14:39:33 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:23.763 14:39:33 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:23.763 14:39:33 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:37:23.763 14:39:33 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:37:23.763 14:39:33 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:37:23.763 14:39:33 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:37:23.763 14:39:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:37:23.763 14:39:33 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:37:23.763 14:39:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:37:23.763 14:39:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:23.763 14:39:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:24.021 EAL: No free 2048 kB hugepages reported on node 1 00:37:28.199 
14:39:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:37:28.199 14:39:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:37:28.200 14:39:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:28.200 14:39:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:28.458 EAL: No free 2048 kB hugepages reported on node 1 00:37:32.641 14:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:37:32.641 14:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:32.641 14:39:41 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:32.641 14:39:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:32.641 14:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:32.641 14:39:41 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:32.641 14:39:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:32.641 14:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1562122 00:37:32.641 14:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:32.641 14:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:32.641 14:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1562122 00:37:32.641 14:39:41 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1562122 ']' 00:37:32.641 14:39:41 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:32.641 14:39:41 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:32.641 14:39:41 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:32.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:32.642 14:39:41 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:32.642 14:39:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:32.642 [2024-07-10 14:39:42.046625] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:37:32.642 [2024-07-10 14:39:42.046775] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:32.900 EAL: No free 2048 kB hugepages reported on node 1 00:37:32.900 [2024-07-10 14:39:42.180064] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:33.158 [2024-07-10 14:39:42.434396] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:33.158 [2024-07-10 14:39:42.434472] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
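At this point the target has been started with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace, so configuration RPCs can be delivered before the framework initializes. The rpc_cmd calls that follow in the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py, talking to /var/tmp/spdk.sock) amount to roughly the sequence below, sketched with the values from this run (BDF 0000:88:00.0, listener 10.0.0.2:4420); treat it as an illustration of the flow, not a verbatim replay:

  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # must land before framework_start_init
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The test then runs spdk_nvme_identify against the TCP listener and checks that the serial and model numbers it gets back match the ones read directly from the PCIe controller above, which is what --passthru-identify-ctrlr is meant to guarantee.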
00:37:33.158 [2024-07-10 14:39:42.434500] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:33.158 [2024-07-10 14:39:42.434521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:33.158 [2024-07-10 14:39:42.434541] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:33.158 [2024-07-10 14:39:42.434659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.158 [2024-07-10 14:39:42.434726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:33.158 [2024-07-10 14:39:42.434817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.158 [2024-07-10 14:39:42.434827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:33.723 14:39:42 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:33.723 14:39:42 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:37:33.723 14:39:42 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:33.723 14:39:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.723 14:39:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:33.723 INFO: Log level set to 20 00:37:33.723 INFO: Requests: 00:37:33.723 { 00:37:33.723 "jsonrpc": "2.0", 00:37:33.723 "method": "nvmf_set_config", 00:37:33.723 "id": 1, 00:37:33.723 "params": { 00:37:33.723 "admin_cmd_passthru": { 00:37:33.723 "identify_ctrlr": true 00:37:33.723 } 00:37:33.723 } 00:37:33.723 } 00:37:33.723 00:37:33.723 INFO: response: 00:37:33.723 { 00:37:33.723 "jsonrpc": "2.0", 00:37:33.723 "id": 1, 00:37:33.723 "result": true 00:37:33.723 } 00:37:33.723 00:37:33.723 14:39:42 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.723 14:39:42 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:33.723 14:39:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.723 14:39:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:33.723 INFO: Setting log level to 20 00:37:33.723 INFO: Setting log level to 20 00:37:33.723 INFO: Log level set to 20 00:37:33.723 INFO: Log level set to 20 00:37:33.723 INFO: Requests: 00:37:33.723 { 00:37:33.723 "jsonrpc": "2.0", 00:37:33.723 "method": "framework_start_init", 00:37:33.723 "id": 1 00:37:33.723 } 00:37:33.723 00:37:33.723 INFO: Requests: 00:37:33.723 { 00:37:33.723 "jsonrpc": "2.0", 00:37:33.723 "method": "framework_start_init", 00:37:33.723 "id": 1 00:37:33.723 } 00:37:33.723 00:37:33.980 [2024-07-10 14:39:43.293301] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:33.980 INFO: response: 00:37:33.980 { 00:37:33.980 "jsonrpc": "2.0", 00:37:33.980 "id": 1, 00:37:33.980 "result": true 00:37:33.980 } 00:37:33.980 00:37:33.980 INFO: response: 00:37:33.980 { 00:37:33.980 "jsonrpc": "2.0", 00:37:33.980 "id": 1, 00:37:33.980 "result": true 00:37:33.980 } 00:37:33.980 00:37:33.980 14:39:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.980 14:39:43 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:33.980 14:39:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.980 14:39:43 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:37:33.980 INFO: Setting log level to 40 00:37:33.980 INFO: Setting log level to 40 00:37:33.980 INFO: Setting log level to 40 00:37:33.980 [2024-07-10 14:39:43.306311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:33.980 14:39:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.980 14:39:43 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:33.980 14:39:43 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:33.980 14:39:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:33.980 14:39:43 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:37:33.980 14:39:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.980 14:39:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:37.260 Nvme0n1 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:37.260 [2024-07-10 14:39:46.255844] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:37.260 [ 00:37:37.260 { 00:37:37.260 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:37.260 "subtype": "Discovery", 00:37:37.260 "listen_addresses": [], 00:37:37.260 "allow_any_host": true, 00:37:37.260 "hosts": [] 00:37:37.260 }, 00:37:37.260 { 00:37:37.260 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:37.260 "subtype": "NVMe", 00:37:37.260 "listen_addresses": [ 00:37:37.260 { 00:37:37.260 "trtype": "TCP", 00:37:37.260 "adrfam": "IPv4", 00:37:37.260 "traddr": "10.0.0.2", 00:37:37.260 "trsvcid": "4420" 00:37:37.260 } 00:37:37.260 ], 00:37:37.260 "allow_any_host": true, 00:37:37.260 "hosts": [], 00:37:37.260 "serial_number": 
"SPDK00000000000001", 00:37:37.260 "model_number": "SPDK bdev Controller", 00:37:37.260 "max_namespaces": 1, 00:37:37.260 "min_cntlid": 1, 00:37:37.260 "max_cntlid": 65519, 00:37:37.260 "namespaces": [ 00:37:37.260 { 00:37:37.260 "nsid": 1, 00:37:37.260 "bdev_name": "Nvme0n1", 00:37:37.260 "name": "Nvme0n1", 00:37:37.260 "nguid": "D9269943AF4B4B6FBB4D2D2A13788B7C", 00:37:37.260 "uuid": "d9269943-af4b-4b6f-bb4d-2d2a13788b7c" 00:37:37.260 } 00:37:37.260 ] 00:37:37.260 } 00:37:37.260 ] 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:37.260 EAL: No free 2048 kB hugepages reported on node 1 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:37.260 EAL: No free 2048 kB hugepages reported on node 1 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:37:37.260 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.260 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:37.531 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.531 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:37.531 14:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:37.531 14:39:46 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:37.531 14:39:46 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:37:37.531 14:39:46 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:37.531 14:39:46 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:37:37.531 14:39:46 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:37.531 14:39:46 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:37.531 rmmod nvme_tcp 00:37:37.531 rmmod nvme_fabrics 00:37:37.531 rmmod nvme_keyring 00:37:37.531 14:39:46 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:37.531 14:39:46 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:37:37.531 14:39:46 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:37:37.531 14:39:46 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1562122 ']' 00:37:37.531 14:39:46 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1562122 00:37:37.531 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1562122 ']' 00:37:37.531 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1562122 00:37:37.531 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:37:37.531 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:37.531 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1562122 00:37:37.531 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:37.531 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:37.531 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1562122' 00:37:37.531 killing process with pid 1562122 00:37:37.531 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1562122 00:37:37.531 14:39:46 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1562122 00:37:40.058 14:39:49 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:40.058 14:39:49 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:40.058 14:39:49 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:40.058 14:39:49 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:40.058 14:39:49 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:40.058 14:39:49 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:40.058 14:39:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:40.058 14:39:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:42.591 14:39:51 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:42.591 00:37:42.591 real 0m20.312s 00:37:42.591 user 0m33.084s 00:37:42.591 sys 0m2.586s 00:37:42.591 14:39:51 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:42.591 14:39:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:42.591 ************************************ 00:37:42.591 END TEST nvmf_identify_passthru 00:37:42.591 ************************************ 00:37:42.591 14:39:51 -- common/autotest_common.sh@1142 -- # return 0 00:37:42.591 14:39:51 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:42.591 14:39:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:42.591 14:39:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:42.591 14:39:51 -- common/autotest_common.sh@10 -- # set +x 00:37:42.591 ************************************ 00:37:42.591 START TEST nvmf_dif 00:37:42.591 ************************************ 00:37:42.591 14:39:51 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:42.591 * Looking for test storage... 
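The teardown above (nvmftestfini / nvmf_tcp_fini) unloads the host-side NVMe fabrics modules, kills the target pid, and flushes the addresses that nvmf_tcp_init assigned. A rough manual equivalent, using the names from this run; the namespace removal happens inside _remove_spdk_ns, whose commands are suppressed in the trace, so that line is an assumption rather than something shown here:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of the suppressed _remove_spdk_ns step
  ip -4 addr flush cvl_0_1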
00:37:42.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:42.591 14:39:51 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:42.591 14:39:51 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:42.591 14:39:51 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:42.591 14:39:51 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:42.591 14:39:51 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:42.591 14:39:51 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:42.591 14:39:51 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:42.592 14:39:51 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:42.592 14:39:51 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:42.592 14:39:51 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:42.592 14:39:51 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.592 14:39:51 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.592 14:39:51 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.592 14:39:51 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:37:42.592 14:39:51 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:42.592 14:39:51 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:42.592 14:39:51 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:37:42.592 14:39:51 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:42.592 14:39:51 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:42.592 14:39:51 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:42.592 14:39:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:42.592 14:39:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:42.592 14:39:51 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:37:42.592 14:39:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:44.493 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:44.493 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:44.493 14:39:53 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:44.494 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:44.494 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:44.494 14:39:53 
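Stripped of the xtrace prefixes, the interface plumbing performed here amounts to the following sequence (a minimal sketch of what nvmf_tcp_init did on this machine; the interface names and addresses are simply the ones this job detected, not fixed values):

    TARGET_IF=cvl_0_0           # port handed to the SPDK target
    INITIATOR_IF=cvl_0_1        # port left in the root namespace for the initiator
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

The two ping checks that follow in the log confirm that 10.0.0.1 (initiator side, root namespace) and 10.0.0.2 (target side, inside the namespace) can reach each other before any NVMe/TCP traffic is attempted.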
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:44.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:44.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:37:44.494 00:37:44.494 --- 10.0.0.2 ping statistics --- 00:37:44.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:44.494 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:44.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:44.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:37:44.494 00:37:44.494 --- 10.0.0.1 ping statistics --- 00:37:44.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:44.494 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:37:44.494 14:39:53 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:45.428 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:37:45.428 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:45.428 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:37:45.428 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:37:45.428 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:37:45.428 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:37:45.428 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:37:45.428 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:37:45.428 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:37:45.428 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:37:45.428 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:37:45.428 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:37:45.428 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:37:45.428 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:37:45.428 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:37:45.428 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:37:45.428 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:37:45.428 14:39:54 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:45.428 14:39:54 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:45.428 14:39:54 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:45.428 14:39:54 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:45.428 14:39:54 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:45.428 14:39:54 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:45.428 14:39:54 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:45.428 14:39:54 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:45.428 14:39:54 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:45.428 14:39:54 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:45.428 14:39:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:45.428 14:39:54 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1565538 00:37:45.428 14:39:54 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:45.428 14:39:54 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1565538 00:37:45.428 14:39:54 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1565538 ']' 00:37:45.428 14:39:54 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:45.428 14:39:54 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:45.428 14:39:54 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:45.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:45.428 14:39:54 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:45.428 14:39:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:45.428 [2024-07-10 14:39:54.864768] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:37:45.428 [2024-07-10 14:39:54.864914] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:45.687 EAL: No free 2048 kB hugepages reported on node 1 00:37:45.687 [2024-07-10 14:39:54.994908] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.944 [2024-07-10 14:39:55.249700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:45.944 [2024-07-10 14:39:55.249785] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:45.944 [2024-07-10 14:39:55.249814] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:45.944 [2024-07-10 14:39:55.249840] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:45.944 [2024-07-10 14:39:55.249863] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
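At this point the target application is up. Reduced to a sketch, what the nvmfappstart/waitforlisten helpers did is roughly the following (the socket-polling loop is a simplification of waitforlisten, and rpc_cmd in the trace is assumed to be a thin wrapper around scripts/rpc.py):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -i sets the shared-memory id, -e 0xFFFF the tracepoint group mask noted above
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    # wait until the app has created its RPC socket before issuing commands
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done

The first RPC the dif.sh test then issues is the transport creation seen just below, nvmf_create_transport -t tcp -o --dif-insert-or-strip, which enables DIF insert/strip on the TCP transport so the target handles protection information on behalf of the fio initiator.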
00:37:45.944 [2024-07-10 14:39:55.249921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:46.509 14:39:55 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:46.509 14:39:55 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:37:46.509 14:39:55 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:46.509 14:39:55 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:46.509 14:39:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:46.509 14:39:55 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:46.509 14:39:55 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:46.509 14:39:55 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:46.509 14:39:55 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.509 14:39:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:46.509 [2024-07-10 14:39:55.842177] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:46.509 14:39:55 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.509 14:39:55 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:46.509 14:39:55 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:46.509 14:39:55 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:46.509 14:39:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:46.509 ************************************ 00:37:46.509 START TEST fio_dif_1_default 00:37:46.509 ************************************ 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:46.509 bdev_null0 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:46.509 [2024-07-10 14:39:55.898505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:46.509 14:39:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:46.509 { 00:37:46.509 "params": { 00:37:46.510 "name": "Nvme$subsystem", 00:37:46.510 "trtype": "$TEST_TRANSPORT", 00:37:46.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:46.510 "adrfam": "ipv4", 00:37:46.510 "trsvcid": "$NVMF_PORT", 00:37:46.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:46.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:46.510 "hdgst": ${hdgst:-false}, 00:37:46.510 "ddgst": ${ddgst:-false} 00:37:46.510 }, 00:37:46.510 "method": "bdev_nvme_attach_controller" 00:37:46.510 } 00:37:46.510 EOF 00:37:46.510 )") 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:46.510 "params": { 00:37:46.510 "name": "Nvme0", 00:37:46.510 "trtype": "tcp", 00:37:46.510 "traddr": "10.0.0.2", 00:37:46.510 "adrfam": "ipv4", 00:37:46.510 "trsvcid": "4420", 00:37:46.510 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:46.510 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:46.510 "hdgst": false, 00:37:46.510 "ddgst": false 00:37:46.510 }, 00:37:46.510 "method": "bdev_nvme_attach_controller" 00:37:46.510 }' 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:46.510 14:39:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:46.766 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:46.766 fio-3.35 00:37:46.766 Starting 1 thread 00:37:47.023 EAL: No free 2048 kB hugepages reported on node 1 00:37:59.219 00:37:59.219 filename0: (groupid=0, jobs=1): err= 0: pid=1565889: Wed Jul 10 14:40:07 2024 00:37:59.219 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10039msec) 00:37:59.219 slat (usec): min=6, max=218, avg=15.91, stdev= 8.42 00:37:59.219 clat (usec): min=41727, max=42270, avg=41954.00, stdev=36.53 00:37:59.219 lat (usec): min=41743, max=42295, avg=41969.92, stdev=36.69 00:37:59.219 clat percentiles (usec): 00:37:59.219 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:37:59.219 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:37:59.219 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:59.219 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:59.219 | 99.99th=[42206] 00:37:59.219 bw ( KiB/s): min= 352, max= 384, per=99.76%, avg=380.80, stdev= 9.85, samples=20 00:37:59.219 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:37:59.219 lat (msec) : 50=100.00% 00:37:59.219 cpu : usr=92.01%, sys=7.45%, ctx=14, majf=0, minf=1636 00:37:59.219 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:59.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.219 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.219 latency : target=0, window=0, percentile=100.00%, depth=4 
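The fio invocation buried in the trace above is easier to read in isolation. A sketch of what the fio_bdev helper effectively runs for this job, with the generated JSON written to a file instead of being fed through /dev/fd/62 (the envelope around the bdev_nvme_attach_controller entry shown in the trace is the usual SPDK bdev-subsystem config layout; job.fio is a placeholder for the job file passed on /dev/fd/61, which carries the 4k randread, iodepth=4 workload reported in the filename0 line):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_bdev" \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0.json job.fio

The "Run status" summary that follows is fio's normal end-of-run output for this single-thread job.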
00:37:59.219 00:37:59.219 Run status group 0 (all jobs): 00:37:59.219 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10039-10039msec 00:37:59.219 ----------------------------------------------------- 00:37:59.219 Suppressions used: 00:37:59.219 count bytes template 00:37:59.219 1 8 /usr/src/fio/parse.c 00:37:59.219 1 8 libtcmalloc_minimal.so 00:37:59.219 1 904 libcrypto.so 00:37:59.219 ----------------------------------------------------- 00:37:59.219 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.219 00:37:59.219 real 0m12.341s 00:37:59.219 user 0m11.503s 00:37:59.219 sys 0m1.182s 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:59.219 ************************************ 00:37:59.219 END TEST fio_dif_1_default 00:37:59.219 ************************************ 00:37:59.219 14:40:08 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:37:59.219 14:40:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:59.219 14:40:08 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:59.219 14:40:08 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:59.219 14:40:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:59.219 ************************************ 00:37:59.219 START TEST fio_dif_1_multi_subsystems 00:37:59.219 ************************************ 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:59.219 14:40:08 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:59.219 bdev_null0 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:59.219 [2024-07-10 14:40:08.296570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:59.219 bdev_null1 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:59.219 
14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:59.219 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:59.220 { 00:37:59.220 "params": { 00:37:59.220 "name": "Nvme$subsystem", 00:37:59.220 "trtype": "$TEST_TRANSPORT", 00:37:59.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:59.220 "adrfam": "ipv4", 00:37:59.220 "trsvcid": "$NVMF_PORT", 00:37:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:59.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:59.220 "hdgst": ${hdgst:-false}, 00:37:59.220 "ddgst": ${ddgst:-false} 00:37:59.220 }, 00:37:59.220 "method": "bdev_nvme_attach_controller" 00:37:59.220 } 00:37:59.220 EOF 00:37:59.220 )") 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:59.220 
14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:59.220 { 00:37:59.220 "params": { 00:37:59.220 "name": "Nvme$subsystem", 00:37:59.220 "trtype": "$TEST_TRANSPORT", 00:37:59.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:59.220 "adrfam": "ipv4", 00:37:59.220 "trsvcid": "$NVMF_PORT", 00:37:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:59.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:59.220 "hdgst": ${hdgst:-false}, 00:37:59.220 "ddgst": ${ddgst:-false} 00:37:59.220 }, 00:37:59.220 "method": "bdev_nvme_attach_controller" 00:37:59.220 } 00:37:59.220 EOF 00:37:59.220 )") 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:59.220 "params": { 00:37:59.220 "name": "Nvme0", 00:37:59.220 "trtype": "tcp", 00:37:59.220 "traddr": "10.0.0.2", 00:37:59.220 "adrfam": "ipv4", 00:37:59.220 "trsvcid": "4420", 00:37:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:59.220 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:59.220 "hdgst": false, 00:37:59.220 "ddgst": false 00:37:59.220 }, 00:37:59.220 "method": "bdev_nvme_attach_controller" 00:37:59.220 },{ 00:37:59.220 "params": { 00:37:59.220 "name": "Nvme1", 00:37:59.220 "trtype": "tcp", 00:37:59.220 "traddr": "10.0.0.2", 00:37:59.220 "adrfam": "ipv4", 00:37:59.220 "trsvcid": "4420", 00:37:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:59.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:59.220 "hdgst": false, 00:37:59.220 "ddgst": false 00:37:59.220 }, 00:37:59.220 "method": "bdev_nvme_attach_controller" 00:37:59.220 }' 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:59.220 14:40:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:59.220 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:59.220 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:59.220 fio-3.35 00:37:59.220 Starting 2 threads 00:37:59.478 EAL: No free 2048 kB hugepages reported on node 1 00:38:11.676 00:38:11.676 filename0: (groupid=0, jobs=1): err= 0: pid=1567413: Wed Jul 10 14:40:19 2024 00:38:11.676 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10019msec) 00:38:11.676 slat (usec): min=5, max=206, avg=15.46, stdev= 6.78 00:38:11.676 clat (usec): min=905, max=43274, avg=21544.95, stdev=20469.94 00:38:11.676 lat (usec): min=918, max=43291, avg=21560.40, stdev=20470.29 00:38:11.676 clat percentiles (usec): 00:38:11.676 | 1.00th=[ 930], 5.00th=[ 938], 10.00th=[ 955], 20.00th=[ 971], 00:38:11.676 | 30.00th=[ 996], 40.00th=[ 1029], 50.00th=[41157], 60.00th=[41681], 00:38:11.676 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:38:11.676 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:38:11.676 | 99.99th=[43254] 00:38:11.676 bw ( KiB/s): min= 672, max= 768, per=66.04%, avg=740.80, stdev=34.86, samples=20 00:38:11.676 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:38:11.676 lat (usec) : 1000=31.41% 00:38:11.676 lat (msec) : 2=18.37%, 50=50.22% 00:38:11.676 cpu : usr=93.46%, sys=6.04%, ctx=14, majf=0, minf=1638 00:38:11.676 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:11.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:11.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:11.677 issued 
rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:11.677 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:11.677 filename1: (groupid=0, jobs=1): err= 0: pid=1567414: Wed Jul 10 14:40:19 2024 00:38:11.677 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10038msec) 00:38:11.677 slat (nsec): min=5789, max=61766, avg=15715.32, stdev=5128.17 00:38:11.677 clat (usec): min=40996, max=42571, avg=41950.52, stdev=80.24 00:38:11.677 lat (usec): min=41008, max=42596, avg=41966.23, stdev=80.84 00:38:11.677 clat percentiles (usec): 00:38:11.677 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:38:11.677 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:38:11.677 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:11.677 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:38:11.677 | 99.99th=[42730] 00:38:11.677 bw ( KiB/s): min= 352, max= 384, per=33.91%, avg=380.80, stdev= 9.85, samples=20 00:38:11.677 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:38:11.677 lat (msec) : 50=100.00% 00:38:11.677 cpu : usr=93.06%, sys=6.24%, ctx=48, majf=0, minf=1637 00:38:11.677 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:11.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:11.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:11.677 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:11.677 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:11.677 00:38:11.677 Run status group 0 (all jobs): 00:38:11.677 READ: bw=1121KiB/s (1147kB/s), 381KiB/s-741KiB/s (390kB/s-759kB/s), io=11.0MiB (11.5MB), run=10019-10038msec 00:38:11.677 ----------------------------------------------------- 00:38:11.677 Suppressions used: 00:38:11.677 count bytes template 00:38:11.677 2 16 /usr/src/fio/parse.c 00:38:11.677 1 8 libtcmalloc_minimal.so 00:38:11.677 1 904 libcrypto.so 00:38:11.677 ----------------------------------------------------- 00:38:11.677 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@45 -- # for sub in "$@" 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:11.677 00:38:11.677 real 0m12.732s 00:38:11.677 user 0m21.348s 00:38:11.677 sys 0m1.663s 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:11.677 14:40:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:11.677 ************************************ 00:38:11.677 END TEST fio_dif_1_multi_subsystems 00:38:11.677 ************************************ 00:38:11.677 14:40:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:11.677 14:40:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:11.677 14:40:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:11.677 14:40:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:11.677 14:40:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:11.677 ************************************ 00:38:11.677 START TEST fio_dif_rand_params 00:38:11.677 ************************************ 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:11.677 14:40:21 
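Each per-test subsystem is built with the same four RPCs; only the DIF type of the backing null bdev and the fio job parameters change. For this rand_params case (NULL_DIF=3, 128k blocks, 3 jobs, iodepth 3, 5-second runtime), the sequence traced around this point is, in plain rpc.py form (rpc_cmd is assumed to forward to scripts/rpc.py):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"
    # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata per block, DIF type 3
    "$rpc" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420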
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.677 bdev_null0 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.677 [2024-07-10 14:40:21.076392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:11.677 14:40:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:11.678 { 00:38:11.678 "params": { 00:38:11.678 "name": "Nvme$subsystem", 00:38:11.678 "trtype": "$TEST_TRANSPORT", 
00:38:11.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:11.678 "adrfam": "ipv4", 00:38:11.678 "trsvcid": "$NVMF_PORT", 00:38:11.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:11.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:11.678 "hdgst": ${hdgst:-false}, 00:38:11.678 "ddgst": ${ddgst:-false} 00:38:11.678 }, 00:38:11.678 "method": "bdev_nvme_attach_controller" 00:38:11.678 } 00:38:11.678 EOF 00:38:11.678 )") 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:11.678 "params": { 00:38:11.678 "name": "Nvme0", 00:38:11.678 "trtype": "tcp", 00:38:11.678 "traddr": "10.0.0.2", 00:38:11.678 "adrfam": "ipv4", 00:38:11.678 "trsvcid": "4420", 00:38:11.678 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:11.678 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:11.678 "hdgst": false, 00:38:11.678 "ddgst": false 00:38:11.678 }, 00:38:11.678 "method": "bdev_nvme_attach_controller" 00:38:11.678 }' 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:11.678 14:40:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:11.936 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:11.936 ... 
00:38:11.936 fio-3.35 00:38:11.936 Starting 3 threads 00:38:12.193 EAL: No free 2048 kB hugepages reported on node 1 00:38:18.746 00:38:18.746 filename0: (groupid=0, jobs=1): err= 0: pid=1568931: Wed Jul 10 14:40:27 2024 00:38:18.746 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(122MiB/5006msec) 00:38:18.746 slat (nsec): min=6894, max=37039, avg=19755.72, stdev=3006.34 00:38:18.746 clat (usec): min=5235, max=89780, avg=15323.86, stdev=12719.64 00:38:18.746 lat (usec): min=5255, max=89800, avg=15343.62, stdev=12719.65 00:38:18.746 clat percentiles (usec): 00:38:18.746 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 7242], 20.00th=[ 9372], 00:38:18.746 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11207], 60.00th=[12387], 00:38:18.746 | 70.00th=[14091], 80.00th=[15270], 90.00th=[19006], 95.00th=[52691], 00:38:18.746 | 99.00th=[58459], 99.50th=[58459], 99.90th=[89654], 99.95th=[89654], 00:38:18.746 | 99.99th=[89654] 00:38:18.746 bw ( KiB/s): min=19968, max=32512, per=35.74%, avg=24985.60, stdev=4383.18, samples=10 00:38:18.746 iops : min= 156, max= 254, avg=195.20, stdev=34.24, samples=10 00:38:18.746 lat (msec) : 10=30.06%, 20=60.22%, 50=1.53%, 100=8.18% 00:38:18.746 cpu : usr=92.77%, sys=6.65%, ctx=6, majf=0, minf=1635 00:38:18.746 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:18.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.746 issued rwts: total=978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.746 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:18.746 filename0: (groupid=0, jobs=1): err= 0: pid=1568932: Wed Jul 10 14:40:27 2024 00:38:18.746 read: IOPS=167, BW=20.9MiB/s (21.9MB/s)(105MiB/5005msec) 00:38:18.746 slat (nsec): min=6568, max=44998, avg=20232.41, stdev=3989.02 00:38:18.746 clat (usec): min=6040, max=61473, avg=17886.50, stdev=14926.56 00:38:18.746 lat (usec): min=6060, max=61493, avg=17906.74, stdev=14926.69 00:38:18.746 clat percentiles (usec): 00:38:18.746 | 1.00th=[ 6718], 5.00th=[ 7308], 10.00th=[ 7898], 20.00th=[ 9765], 00:38:18.746 | 30.00th=[10552], 40.00th=[11207], 50.00th=[12256], 60.00th=[13698], 00:38:18.746 | 70.00th=[14877], 80.00th=[16712], 90.00th=[52167], 95.00th=[54789], 00:38:18.746 | 99.00th=[56886], 99.50th=[57410], 99.90th=[61604], 99.95th=[61604], 00:38:18.746 | 99.99th=[61604] 00:38:18.746 bw ( KiB/s): min=16640, max=25600, per=30.62%, avg=21406.30, stdev=2749.11, samples=10 00:38:18.746 iops : min= 130, max= 200, avg=167.20, stdev=21.44, samples=10 00:38:18.746 lat (msec) : 10=23.27%, 20=61.93%, 50=1.55%, 100=13.25% 00:38:18.746 cpu : usr=92.71%, sys=6.73%, ctx=7, majf=0, minf=1637 00:38:18.746 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:18.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.746 issued rwts: total=838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.746 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:18.746 filename0: (groupid=0, jobs=1): err= 0: pid=1568933: Wed Jul 10 14:40:27 2024 00:38:18.746 read: IOPS=183, BW=22.9MiB/s (24.1MB/s)(115MiB/5008msec) 00:38:18.746 slat (nsec): min=5526, max=82292, avg=22687.22, stdev=5540.17 00:38:18.746 clat (usec): min=5999, max=57454, avg=16309.22, stdev=13881.93 00:38:18.746 lat (usec): min=6018, max=57483, avg=16331.90, stdev=13882.71 00:38:18.746 clat percentiles (usec): 
00:38:18.746 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 9110], 00:38:18.746 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[11469], 60.00th=[12649], 00:38:18.746 | 70.00th=[13960], 80.00th=[15270], 90.00th=[50594], 95.00th=[53216], 00:38:18.746 | 99.00th=[54789], 99.50th=[54789], 99.90th=[57410], 99.95th=[57410], 00:38:18.746 | 99.99th=[57410] 00:38:18.746 bw ( KiB/s): min=17152, max=29440, per=33.58%, avg=23475.20, stdev=4197.93, samples=10 00:38:18.746 iops : min= 134, max= 230, avg=183.40, stdev=32.80, samples=10 00:38:18.746 lat (msec) : 10=31.34%, 20=56.04%, 50=1.63%, 100=10.99% 00:38:18.746 cpu : usr=89.65%, sys=8.09%, ctx=317, majf=0, minf=1637 00:38:18.746 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:18.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.746 issued rwts: total=919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.746 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:18.746 00:38:18.746 Run status group 0 (all jobs): 00:38:18.747 READ: bw=68.3MiB/s (71.6MB/s), 20.9MiB/s-24.4MiB/s (21.9MB/s-25.6MB/s), io=342MiB (358MB), run=5005-5008msec 00:38:19.006 ----------------------------------------------------- 00:38:19.006 Suppressions used: 00:38:19.006 count bytes template 00:38:19.006 5 44 /usr/src/fio/parse.c 00:38:19.006 1 8 libtcmalloc_minimal.so 00:38:19.006 1 904 libcrypto.so 00:38:19.006 ----------------------------------------------------- 00:38:19.006 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:19.006 14:40:28 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.006 bdev_null0 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.006 [2024-07-10 14:40:28.360232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.006 bdev_null1 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.006 bdev_null2 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:19.006 { 00:38:19.006 "params": { 00:38:19.006 "name": "Nvme$subsystem", 00:38:19.006 "trtype": "$TEST_TRANSPORT", 00:38:19.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:19.006 "adrfam": "ipv4", 00:38:19.006 "trsvcid": "$NVMF_PORT", 00:38:19.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:19.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:19.006 "hdgst": ${hdgst:-false}, 00:38:19.006 "ddgst": ${ddgst:-false} 00:38:19.006 }, 00:38:19.006 "method": "bdev_nvme_attach_controller" 00:38:19.006 } 00:38:19.006 EOF 00:38:19.006 )") 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:19.006 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:19.007 { 00:38:19.007 "params": { 00:38:19.007 "name": "Nvme$subsystem", 00:38:19.007 "trtype": "$TEST_TRANSPORT", 00:38:19.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:19.007 "adrfam": "ipv4", 00:38:19.007 "trsvcid": "$NVMF_PORT", 00:38:19.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:19.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:19.007 
"hdgst": ${hdgst:-false}, 00:38:19.007 "ddgst": ${ddgst:-false} 00:38:19.007 }, 00:38:19.007 "method": "bdev_nvme_attach_controller" 00:38:19.007 } 00:38:19.007 EOF 00:38:19.007 )") 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:19.007 { 00:38:19.007 "params": { 00:38:19.007 "name": "Nvme$subsystem", 00:38:19.007 "trtype": "$TEST_TRANSPORT", 00:38:19.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:19.007 "adrfam": "ipv4", 00:38:19.007 "trsvcid": "$NVMF_PORT", 00:38:19.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:19.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:19.007 "hdgst": ${hdgst:-false}, 00:38:19.007 "ddgst": ${ddgst:-false} 00:38:19.007 }, 00:38:19.007 "method": "bdev_nvme_attach_controller" 00:38:19.007 } 00:38:19.007 EOF 00:38:19.007 )") 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:19.007 "params": { 00:38:19.007 "name": "Nvme0", 00:38:19.007 "trtype": "tcp", 00:38:19.007 "traddr": "10.0.0.2", 00:38:19.007 "adrfam": "ipv4", 00:38:19.007 "trsvcid": "4420", 00:38:19.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:19.007 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:19.007 "hdgst": false, 00:38:19.007 "ddgst": false 00:38:19.007 }, 00:38:19.007 "method": "bdev_nvme_attach_controller" 00:38:19.007 },{ 00:38:19.007 "params": { 00:38:19.007 "name": "Nvme1", 00:38:19.007 "trtype": "tcp", 00:38:19.007 "traddr": "10.0.0.2", 00:38:19.007 "adrfam": "ipv4", 00:38:19.007 "trsvcid": "4420", 00:38:19.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:19.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:19.007 "hdgst": false, 00:38:19.007 "ddgst": false 00:38:19.007 }, 00:38:19.007 "method": "bdev_nvme_attach_controller" 00:38:19.007 },{ 00:38:19.007 "params": { 00:38:19.007 "name": "Nvme2", 00:38:19.007 "trtype": "tcp", 00:38:19.007 "traddr": "10.0.0.2", 00:38:19.007 "adrfam": "ipv4", 00:38:19.007 "trsvcid": "4420", 00:38:19.007 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:19.007 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:19.007 "hdgst": false, 00:38:19.007 "ddgst": false 00:38:19.007 }, 00:38:19.007 "method": "bdev_nvme_attach_controller" 00:38:19.007 }' 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:19.007 14:40:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:19.265 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:19.265 ... 00:38:19.265 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:19.265 ... 00:38:19.265 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:19.265 ... 00:38:19.265 fio-3.35 00:38:19.265 Starting 24 threads 00:38:19.523 EAL: No free 2048 kB hugepages reported on node 1 00:38:31.803 00:38:31.803 filename0: (groupid=0, jobs=1): err= 0: pid=1569909: Wed Jul 10 14:40:40 2024 00:38:31.803 read: IOPS=338, BW=1355KiB/s (1387kB/s)(13.2MiB/10016msec) 00:38:31.803 slat (usec): min=13, max=123, avg=39.59, stdev=12.02 00:38:31.803 clat (usec): min=27429, max=93202, avg=46881.03, stdev=2995.63 00:38:31.803 lat (usec): min=27459, max=93230, avg=46920.62, stdev=2994.05 00:38:31.803 clat percentiles (usec): 00:38:31.803 | 1.00th=[44303], 5.00th=[44827], 10.00th=[44827], 20.00th=[45876], 00:38:31.803 | 30.00th=[46400], 40.00th=[46924], 50.00th=[46924], 60.00th=[47449], 00:38:31.803 | 70.00th=[47449], 80.00th=[47973], 90.00th=[47973], 95.00th=[48497], 00:38:31.803 | 99.00th=[49546], 99.50th=[54789], 99.90th=[79168], 99.95th=[92799], 00:38:31.803 | 99.99th=[92799] 00:38:31.803 bw ( KiB/s): min= 1280, max= 1408, per=4.14%, avg=1354.11, stdev=64.93, samples=19 00:38:31.803 iops : min= 320, max= 352, avg=338.53, stdev=16.23, samples=19 00:38:31.803 lat (msec) : 50=99.06%, 100=0.94% 00:38:31.803 cpu : usr=97.95%, sys=1.47%, ctx=44, majf=0, minf=1633 00:38:31.803 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:31.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.803 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.803 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.803 filename0: (groupid=0, jobs=1): err= 0: pid=1569910: Wed Jul 10 14:40:40 2024 00:38:31.803 read: IOPS=338, BW=1354KiB/s (1387kB/s)(13.2MiB/10017msec) 00:38:31.803 slat (nsec): min=12396, max=83091, avg=36807.95, stdev=14715.38 00:38:31.803 clat (usec): min=27390, max=83681, avg=46905.47, stdev=3269.78 00:38:31.803 lat (usec): min=27408, max=83709, avg=46942.28, stdev=3265.89 00:38:31.803 clat percentiles (usec): 00:38:31.803 | 1.00th=[40633], 5.00th=[44303], 10.00th=[44827], 20.00th=[45351], 00:38:31.803 | 30.00th=[46400], 40.00th=[46400], 50.00th=[46924], 60.00th=[47449], 00:38:31.803 | 70.00th=[47449], 80.00th=[47973], 90.00th=[48497], 95.00th=[48497], 00:38:31.803 | 99.00th=[53740], 99.50th=[66847], 99.90th=[83362], 99.95th=[83362], 00:38:31.803 | 99.99th=[83362] 00:38:31.803 bw ( KiB/s): min= 1280, max= 1408, per=4.12%, avg=1347.47, stdev=65.55, samples=19 00:38:31.804 iops : min= 320, max= 352, avg=336.84, stdev=16.42, samples=19 00:38:31.804 lat (msec) : 50=98.88%, 100=1.12% 00:38:31.804 cpu : usr=97.68%, sys=1.83%, ctx=15, majf=0, minf=1635 00:38:31.804 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:31.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:38:31.804 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.804 filename0: (groupid=0, jobs=1): err= 0: pid=1569911: Wed Jul 10 14:40:40 2024 00:38:31.804 read: IOPS=338, BW=1354KiB/s (1387kB/s)(13.2MiB/10017msec) 00:38:31.804 slat (usec): min=6, max=109, avg=38.69, stdev=11.05 00:38:31.804 clat (usec): min=32774, max=73681, avg=46904.71, stdev=2448.10 00:38:31.804 lat (usec): min=32796, max=73710, avg=46943.40, stdev=2445.33 00:38:31.804 clat percentiles (usec): 00:38:31.804 | 1.00th=[44303], 5.00th=[44827], 10.00th=[44827], 20.00th=[45876], 00:38:31.804 | 30.00th=[46400], 40.00th=[46924], 50.00th=[46924], 60.00th=[46924], 00:38:31.804 | 70.00th=[47449], 80.00th=[47973], 90.00th=[47973], 95.00th=[48497], 00:38:31.804 | 99.00th=[54264], 99.50th=[64226], 99.90th=[73925], 99.95th=[73925], 00:38:31.804 | 99.99th=[73925] 00:38:31.804 bw ( KiB/s): min= 1280, max= 1408, per=4.14%, avg=1354.11, stdev=64.93, samples=19 00:38:31.804 iops : min= 320, max= 352, avg=338.53, stdev=16.23, samples=19 00:38:31.804 lat (msec) : 50=99.00%, 100=1.00% 00:38:31.804 cpu : usr=98.07%, sys=1.42%, ctx=17, majf=0, minf=1635 00:38:31.804 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:31.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.804 filename0: (groupid=0, jobs=1): err= 0: pid=1569912: Wed Jul 10 14:40:40 2024 00:38:31.804 read: IOPS=339, BW=1360KiB/s (1393kB/s)(13.3MiB/10024msec) 00:38:31.804 slat (nsec): min=8803, max=95732, avg=34225.77, stdev=18064.44 00:38:31.804 clat (usec): min=25547, max=57086, avg=46758.53, stdev=2053.22 00:38:31.804 lat (usec): min=25603, max=57148, avg=46792.75, stdev=2046.49 00:38:31.804 clat percentiles (usec): 00:38:31.804 | 1.00th=[43254], 5.00th=[44303], 10.00th=[44827], 20.00th=[45876], 00:38:31.804 | 30.00th=[46400], 40.00th=[46924], 50.00th=[46924], 60.00th=[47449], 00:38:31.804 | 70.00th=[47449], 80.00th=[47973], 90.00th=[48497], 95.00th=[48497], 00:38:31.804 | 99.00th=[49021], 99.50th=[55837], 99.90th=[56886], 99.95th=[56886], 00:38:31.804 | 99.99th=[56886] 00:38:31.804 bw ( KiB/s): min= 1280, max= 1408, per=4.14%, avg=1356.80, stdev=64.34, samples=20 00:38:31.804 iops : min= 320, max= 352, avg=339.20, stdev=16.08, samples=20 00:38:31.804 lat (msec) : 50=99.41%, 100=0.59% 00:38:31.804 cpu : usr=97.66%, sys=1.85%, ctx=16, majf=0, minf=1637 00:38:31.804 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:31.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 issued rwts: total=3408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.804 filename0: (groupid=0, jobs=1): err= 0: pid=1569913: Wed Jul 10 14:40:40 2024 00:38:31.804 read: IOPS=344, BW=1378KiB/s (1411kB/s)(13.5MiB/10033msec) 00:38:31.804 slat (nsec): min=10276, max=97023, avg=35099.76, stdev=20756.24 00:38:31.804 clat (usec): min=3935, max=56977, avg=46125.82, stdev=5017.79 00:38:31.804 lat (usec): min=3962, max=57035, 
avg=46160.92, stdev=5016.49 00:38:31.804 clat percentiles (usec): 00:38:31.804 | 1.00th=[14091], 5.00th=[44303], 10.00th=[44827], 20.00th=[45351], 00:38:31.804 | 30.00th=[46400], 40.00th=[46924], 50.00th=[46924], 60.00th=[47449], 00:38:31.804 | 70.00th=[47449], 80.00th=[47973], 90.00th=[48497], 95.00th=[48497], 00:38:31.804 | 99.00th=[49021], 99.50th=[49546], 99.90th=[56886], 99.95th=[56886], 00:38:31.804 | 99.99th=[56886] 00:38:31.804 bw ( KiB/s): min= 1280, max= 1792, per=4.20%, avg=1376.00, stdev=116.54, samples=20 00:38:31.804 iops : min= 320, max= 448, avg=344.00, stdev=29.13, samples=20 00:38:31.804 lat (msec) : 4=0.09%, 10=0.38%, 20=0.98%, 50=98.09%, 100=0.46% 00:38:31.804 cpu : usr=97.81%, sys=1.67%, ctx=37, majf=0, minf=1636 00:38:31.804 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:31.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 issued rwts: total=3456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.804 filename0: (groupid=0, jobs=1): err= 0: pid=1569914: Wed Jul 10 14:40:40 2024 00:38:31.804 read: IOPS=341, BW=1366KiB/s (1399kB/s)(13.4MiB/10027msec) 00:38:31.804 slat (usec): min=9, max=211, avg=42.42, stdev=23.42 00:38:31.804 clat (usec): min=20616, max=58061, avg=46466.34, stdev=3026.85 00:38:31.804 lat (usec): min=20639, max=58099, avg=46508.76, stdev=3029.65 00:38:31.804 clat percentiles (usec): 00:38:31.804 | 1.00th=[25822], 5.00th=[44827], 10.00th=[44827], 20.00th=[45876], 00:38:31.804 | 30.00th=[46400], 40.00th=[46400], 50.00th=[46924], 60.00th=[46924], 00:38:31.804 | 70.00th=[47449], 80.00th=[47449], 90.00th=[47973], 95.00th=[48497], 00:38:31.804 | 99.00th=[49021], 99.50th=[56886], 99.90th=[57934], 99.95th=[57934], 00:38:31.804 | 99.99th=[57934] 00:38:31.804 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1363.20, stdev=75.15, samples=20 00:38:31.804 iops : min= 320, max= 384, avg=340.80, stdev=18.79, samples=20 00:38:31.804 lat (msec) : 50=99.36%, 100=0.64% 00:38:31.804 cpu : usr=97.94%, sys=1.57%, ctx=13, majf=0, minf=1635 00:38:31.804 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:31.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 issued rwts: total=3424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.804 filename0: (groupid=0, jobs=1): err= 0: pid=1569915: Wed Jul 10 14:40:40 2024 00:38:31.804 read: IOPS=338, BW=1355KiB/s (1388kB/s)(13.3MiB/10018msec) 00:38:31.804 slat (usec): min=11, max=102, avg=25.04, stdev=12.03 00:38:31.804 clat (msec): min=26, max=118, avg=47.12, stdev= 4.75 00:38:31.804 lat (msec): min=26, max=118, avg=47.14, stdev= 4.75 00:38:31.804 clat percentiles (msec): 00:38:31.804 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 46], 00:38:31.804 | 30.00th=[ 47], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 48], 00:38:31.804 | 70.00th=[ 48], 80.00th=[ 48], 90.00th=[ 49], 95.00th=[ 50], 00:38:31.804 | 99.00th=[ 57], 99.50th=[ 62], 99.90th=[ 103], 99.95th=[ 118], 00:38:31.804 | 99.99th=[ 118] 00:38:31.804 bw ( KiB/s): min= 1248, max= 1408, per=4.13%, avg=1352.42, stdev=48.99, samples=19 00:38:31.804 iops : min= 312, max= 352, avg=338.11, stdev=12.25, samples=19 00:38:31.804 lat (msec) : 50=98.29%, 
100=1.24%, 250=0.47% 00:38:31.804 cpu : usr=97.95%, sys=1.58%, ctx=17, majf=0, minf=1635 00:38:31.804 IO depths : 1=0.1%, 2=0.1%, 4=0.6%, 8=80.8%, 16=18.5%, 32=0.0%, >=64=0.0% 00:38:31.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 complete : 0=0.0%, 4=89.5%, 8=10.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 issued rwts: total=3394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.804 filename0: (groupid=0, jobs=1): err= 0: pid=1569916: Wed Jul 10 14:40:40 2024 00:38:31.804 read: IOPS=338, BW=1355KiB/s (1388kB/s)(13.2MiB/10013msec) 00:38:31.804 slat (usec): min=9, max=221, avg=32.23, stdev= 9.28 00:38:31.804 clat (usec): min=25541, max=80693, avg=46943.22, stdev=3088.88 00:38:31.804 lat (usec): min=25600, max=80720, avg=46975.45, stdev=3087.22 00:38:31.804 clat percentiles (usec): 00:38:31.804 | 1.00th=[44303], 5.00th=[44827], 10.00th=[44827], 20.00th=[45876], 00:38:31.804 | 30.00th=[46400], 40.00th=[46924], 50.00th=[46924], 60.00th=[47449], 00:38:31.804 | 70.00th=[47449], 80.00th=[47973], 90.00th=[48497], 95.00th=[48497], 00:38:31.804 | 99.00th=[56886], 99.50th=[57934], 99.90th=[80217], 99.95th=[80217], 00:38:31.804 | 99.99th=[80217] 00:38:31.804 bw ( KiB/s): min= 1280, max= 1408, per=4.14%, avg=1354.11, stdev=64.93, samples=19 00:38:31.804 iops : min= 320, max= 352, avg=338.53, stdev=16.23, samples=19 00:38:31.804 lat (msec) : 50=98.82%, 100=1.18% 00:38:31.804 cpu : usr=97.85%, sys=1.60%, ctx=18, majf=0, minf=1637 00:38:31.804 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:31.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.804 filename1: (groupid=0, jobs=1): err= 0: pid=1569917: Wed Jul 10 14:40:40 2024 00:38:31.804 read: IOPS=337, BW=1350KiB/s (1383kB/s)(13.2MiB/10002msec) 00:38:31.804 slat (nsec): min=14696, max=89929, avg=38070.35, stdev=10490.75 00:38:31.804 clat (msec): min=30, max=107, avg=47.05, stdev= 3.70 00:38:31.804 lat (msec): min=30, max=107, avg=47.09, stdev= 3.70 00:38:31.804 clat percentiles (msec): 00:38:31.804 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 46], 00:38:31.804 | 30.00th=[ 47], 40.00th=[ 47], 50.00th=[ 47], 60.00th=[ 47], 00:38:31.804 | 70.00th=[ 48], 80.00th=[ 48], 90.00th=[ 48], 95.00th=[ 49], 00:38:31.804 | 99.00th=[ 55], 99.50th=[ 65], 99.90th=[ 93], 99.95th=[ 107], 00:38:31.804 | 99.99th=[ 108] 00:38:31.804 bw ( KiB/s): min= 1152, max= 1408, per=4.12%, avg=1347.37, stdev=78.31, samples=19 00:38:31.804 iops : min= 288, max= 352, avg=336.84, stdev=19.58, samples=19 00:38:31.804 lat (msec) : 50=98.82%, 100=1.13%, 250=0.06% 00:38:31.804 cpu : usr=96.69%, sys=2.13%, ctx=175, majf=0, minf=1636 00:38:31.804 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:31.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.804 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.804 filename1: (groupid=0, jobs=1): err= 0: pid=1569918: Wed Jul 10 14:40:40 2024 00:38:31.804 read: IOPS=337, BW=1350KiB/s 
(1383kB/s)(13.2MiB/10002msec) 00:38:31.804 slat (nsec): min=5466, max=83213, avg=35695.43, stdev=10348.02 00:38:31.804 clat (msec): min=31, max=107, avg=47.07, stdev= 3.63 00:38:31.804 lat (msec): min=31, max=107, avg=47.10, stdev= 3.63 00:38:31.804 clat percentiles (msec): 00:38:31.804 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 46], 00:38:31.804 | 30.00th=[ 47], 40.00th=[ 47], 50.00th=[ 47], 60.00th=[ 48], 00:38:31.804 | 70.00th=[ 48], 80.00th=[ 48], 90.00th=[ 49], 95.00th=[ 49], 00:38:31.804 | 99.00th=[ 55], 99.50th=[ 64], 99.90th=[ 93], 99.95th=[ 108], 00:38:31.804 | 99.99th=[ 108] 00:38:31.804 bw ( KiB/s): min= 1152, max= 1408, per=4.12%, avg=1347.37, stdev=78.31, samples=19 00:38:31.804 iops : min= 288, max= 352, avg=336.84, stdev=19.58, samples=19 00:38:31.804 lat (msec) : 50=98.93%, 100=1.01%, 250=0.06% 00:38:31.804 cpu : usr=92.05%, sys=4.03%, ctx=209, majf=0, minf=1635 00:38:31.804 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:31.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.805 filename1: (groupid=0, jobs=1): err= 0: pid=1569919: Wed Jul 10 14:40:40 2024 00:38:31.805 read: IOPS=338, BW=1354KiB/s (1387kB/s)(13.2MiB/10019msec) 00:38:31.805 slat (usec): min=8, max=114, avg=48.16, stdev=16.02 00:38:31.805 clat (usec): min=30632, max=76317, avg=46841.23, stdev=3173.70 00:38:31.805 lat (usec): min=30662, max=76356, avg=46889.39, stdev=3169.71 00:38:31.805 clat percentiles (usec): 00:38:31.805 | 1.00th=[33424], 5.00th=[44827], 10.00th=[44827], 20.00th=[45876], 00:38:31.805 | 30.00th=[46400], 40.00th=[46400], 50.00th=[46924], 60.00th=[46924], 00:38:31.805 | 70.00th=[47449], 80.00th=[47973], 90.00th=[47973], 95.00th=[48497], 00:38:31.805 | 99.00th=[61604], 99.50th=[62653], 99.90th=[76022], 99.95th=[76022], 00:38:31.805 | 99.99th=[76022] 00:38:31.805 bw ( KiB/s): min= 1280, max= 1424, per=4.13%, avg=1350.40, stdev=64.08, samples=20 00:38:31.805 iops : min= 320, max= 356, avg=337.60, stdev=16.02, samples=20 00:38:31.805 lat (msec) : 50=98.17%, 100=1.83% 00:38:31.805 cpu : usr=98.08%, sys=1.40%, ctx=33, majf=0, minf=1634 00:38:31.805 IO depths : 1=4.2%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:38:31.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.805 filename1: (groupid=0, jobs=1): err= 0: pid=1569920: Wed Jul 10 14:40:40 2024 00:38:31.805 read: IOPS=338, BW=1354KiB/s (1386kB/s)(13.2MiB/10024msec) 00:38:31.805 slat (nsec): min=12412, max=71886, avg=26820.06, stdev=9281.56 00:38:31.805 clat (usec): min=26993, max=90549, avg=47043.74, stdev=3555.45 00:38:31.805 lat (usec): min=27016, max=90580, avg=47070.56, stdev=3555.43 00:38:31.805 clat percentiles (usec): 00:38:31.805 | 1.00th=[40633], 5.00th=[44827], 10.00th=[45351], 20.00th=[45876], 00:38:31.805 | 30.00th=[46400], 40.00th=[46924], 50.00th=[46924], 60.00th=[47449], 00:38:31.805 | 70.00th=[47449], 80.00th=[47973], 90.00th=[48497], 95.00th=[49021], 00:38:31.805 | 99.00th=[54264], 99.50th=[67634], 99.90th=[90702], 99.95th=[90702], 00:38:31.805 | 
99.99th=[90702] 00:38:31.805 bw ( KiB/s): min= 1152, max= 1408, per=4.12%, avg=1347.37, stdev=78.31, samples=19 00:38:31.805 iops : min= 288, max= 352, avg=336.84, stdev=19.58, samples=19 00:38:31.805 lat (msec) : 50=98.94%, 100=1.06% 00:38:31.805 cpu : usr=95.30%, sys=2.70%, ctx=143, majf=0, minf=1635 00:38:31.805 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:31.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.805 filename1: (groupid=0, jobs=1): err= 0: pid=1569921: Wed Jul 10 14:40:40 2024 00:38:31.805 read: IOPS=341, BW=1365KiB/s (1398kB/s)(13.4MiB/10031msec) 00:38:31.805 slat (usec): min=9, max=101, avg=42.42, stdev=15.71 00:38:31.805 clat (usec): min=12689, max=64168, avg=46508.35, stdev=3049.23 00:38:31.805 lat (usec): min=12729, max=64236, avg=46550.77, stdev=3047.33 00:38:31.805 clat percentiles (usec): 00:38:31.805 | 1.00th=[33424], 5.00th=[44303], 10.00th=[44827], 20.00th=[45876], 00:38:31.805 | 30.00th=[46400], 40.00th=[46400], 50.00th=[46924], 60.00th=[47449], 00:38:31.805 | 70.00th=[47449], 80.00th=[47973], 90.00th=[47973], 95.00th=[48497], 00:38:31.805 | 99.00th=[49021], 99.50th=[54264], 99.90th=[54789], 99.95th=[64226], 00:38:31.805 | 99.99th=[64226] 00:38:31.805 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1363.20, stdev=75.15, samples=20 00:38:31.805 iops : min= 320, max= 384, avg=340.80, stdev=18.79, samples=20 00:38:31.805 lat (msec) : 20=0.47%, 50=99.01%, 100=0.53% 00:38:31.805 cpu : usr=97.80%, sys=1.61%, ctx=75, majf=0, minf=1637 00:38:31.805 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:31.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 issued rwts: total=3424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.805 filename1: (groupid=0, jobs=1): err= 0: pid=1569922: Wed Jul 10 14:40:40 2024 00:38:31.805 read: IOPS=364, BW=1459KiB/s (1494kB/s)(14.3MiB/10014msec) 00:38:31.805 slat (usec): min=12, max=857, avg=30.73, stdev=19.50 00:38:31.805 clat (msec): min=19, max=109, avg=43.64, stdev= 8.60 00:38:31.805 lat (msec): min=19, max=109, avg=43.67, stdev= 8.60 00:38:31.805 clat percentiles (msec): 00:38:31.805 | 1.00th=[ 28], 5.00th=[ 29], 10.00th=[ 33], 20.00th=[ 35], 00:38:31.805 | 30.00th=[ 43], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 47], 00:38:31.805 | 70.00th=[ 47], 80.00th=[ 48], 90.00th=[ 49], 95.00th=[ 54], 00:38:31.805 | 99.00th=[ 68], 99.50th=[ 74], 99.90th=[ 110], 99.95th=[ 110], 00:38:31.805 | 99.99th=[ 110] 00:38:31.805 bw ( KiB/s): min= 1154, max= 1680, per=4.46%, avg=1461.16, stdev=146.80, samples=19 00:38:31.805 iops : min= 288, max= 420, avg=365.26, stdev=36.76, samples=19 00:38:31.805 lat (msec) : 20=0.05%, 50=93.32%, 100=6.19%, 250=0.44% 00:38:31.805 cpu : usr=94.41%, sys=3.00%, ctx=154, majf=0, minf=1637 00:38:31.805 IO depths : 1=1.4%, 2=4.4%, 4=14.1%, 8=68.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:38:31.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 complete : 0=0.0%, 4=91.3%, 8=4.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 issued rwts: total=3652,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:38:31.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.805 filename1: (groupid=0, jobs=1): err= 0: pid=1569923: Wed Jul 10 14:40:40 2024 00:38:31.805 read: IOPS=338, BW=1354KiB/s (1387kB/s)(13.2MiB/10020msec) 00:38:31.805 slat (usec): min=12, max=112, avg=38.92, stdev=14.96 00:38:31.805 clat (usec): min=27118, max=84821, avg=46891.53, stdev=3191.55 00:38:31.805 lat (usec): min=27141, max=84848, avg=46930.45, stdev=3187.80 00:38:31.805 clat percentiles (usec): 00:38:31.805 | 1.00th=[40633], 5.00th=[44303], 10.00th=[44827], 20.00th=[45351], 00:38:31.805 | 30.00th=[46400], 40.00th=[46400], 50.00th=[46924], 60.00th=[47449], 00:38:31.805 | 70.00th=[47449], 80.00th=[47973], 90.00th=[48497], 95.00th=[48497], 00:38:31.805 | 99.00th=[53216], 99.50th=[67634], 99.90th=[84411], 99.95th=[84411], 00:38:31.805 | 99.99th=[84411] 00:38:31.805 bw ( KiB/s): min= 1280, max= 1408, per=4.12%, avg=1347.37, stdev=65.66, samples=19 00:38:31.805 iops : min= 320, max= 352, avg=336.84, stdev=16.42, samples=19 00:38:31.805 lat (msec) : 50=99.00%, 100=1.00% 00:38:31.805 cpu : usr=94.96%, sys=2.68%, ctx=107, majf=0, minf=1634 00:38:31.805 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:31.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.805 filename1: (groupid=0, jobs=1): err= 0: pid=1569924: Wed Jul 10 14:40:40 2024 00:38:31.805 read: IOPS=338, BW=1354KiB/s (1387kB/s)(13.2MiB/10017msec) 00:38:31.805 slat (nsec): min=7948, max=92472, avg=30289.19, stdev=10663.77 00:38:31.805 clat (msec): min=24, max=102, avg=46.95, stdev= 4.50 00:38:31.805 lat (msec): min=24, max=102, avg=46.98, stdev= 4.49 00:38:31.805 clat percentiles (msec): 00:38:31.805 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 46], 00:38:31.805 | 30.00th=[ 47], 40.00th=[ 47], 50.00th=[ 47], 60.00th=[ 48], 00:38:31.805 | 70.00th=[ 48], 80.00th=[ 48], 90.00th=[ 49], 95.00th=[ 49], 00:38:31.805 | 99.00th=[ 50], 99.50th=[ 57], 99.90th=[ 103], 99.95th=[ 103], 00:38:31.805 | 99.99th=[ 103] 00:38:31.805 bw ( KiB/s): min= 1152, max= 1408, per=4.12%, avg=1347.37, stdev=78.31, samples=19 00:38:31.805 iops : min= 288, max= 352, avg=336.84, stdev=19.58, samples=19 00:38:31.805 lat (msec) : 50=99.06%, 100=0.47%, 250=0.47% 00:38:31.805 cpu : usr=97.38%, sys=1.76%, ctx=20, majf=0, minf=1636 00:38:31.805 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:31.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.805 filename2: (groupid=0, jobs=1): err= 0: pid=1569925: Wed Jul 10 14:40:40 2024 00:38:31.805 read: IOPS=341, BW=1365KiB/s (1398kB/s)(13.4MiB/10031msec) 00:38:31.805 slat (nsec): min=6546, max=87586, avg=31189.76, stdev=10520.18 00:38:31.805 clat (usec): min=19773, max=62394, avg=46596.56, stdev=3032.82 00:38:31.805 lat (usec): min=19785, max=62434, avg=46627.75, stdev=3032.60 00:38:31.805 clat percentiles (usec): 00:38:31.805 | 1.00th=[32375], 5.00th=[44827], 10.00th=[44827], 20.00th=[45876], 00:38:31.805 | 30.00th=[46400], 
40.00th=[46924], 50.00th=[46924], 60.00th=[47449], 00:38:31.805 | 70.00th=[47449], 80.00th=[47973], 90.00th=[47973], 95.00th=[48497], 00:38:31.805 | 99.00th=[49546], 99.50th=[54264], 99.90th=[54789], 99.95th=[62129], 00:38:31.805 | 99.99th=[62653] 00:38:31.805 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1363.20, stdev=75.15, samples=20 00:38:31.805 iops : min= 320, max= 384, avg=340.80, stdev=18.79, samples=20 00:38:31.805 lat (msec) : 20=0.47%, 50=98.95%, 100=0.58% 00:38:31.805 cpu : usr=97.82%, sys=1.70%, ctx=19, majf=0, minf=1634 00:38:31.805 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:31.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 issued rwts: total=3424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.805 filename2: (groupid=0, jobs=1): err= 0: pid=1569926: Wed Jul 10 14:40:40 2024 00:38:31.805 read: IOPS=338, BW=1354KiB/s (1387kB/s)(13.2MiB/10019msec) 00:38:31.805 slat (nsec): min=12979, max=96675, avg=40787.45, stdev=16584.67 00:38:31.805 clat (msec): min=23, max=106, avg=46.91, stdev= 3.90 00:38:31.805 lat (msec): min=23, max=106, avg=46.95, stdev= 3.89 00:38:31.805 clat percentiles (msec): 00:38:31.805 | 1.00th=[ 41], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 46], 00:38:31.805 | 30.00th=[ 47], 40.00th=[ 47], 50.00th=[ 47], 60.00th=[ 48], 00:38:31.805 | 70.00th=[ 48], 80.00th=[ 48], 90.00th=[ 49], 95.00th=[ 49], 00:38:31.805 | 99.00th=[ 55], 99.50th=[ 73], 99.90th=[ 87], 99.95th=[ 107], 00:38:31.805 | 99.99th=[ 107] 00:38:31.805 bw ( KiB/s): min= 1280, max= 1408, per=4.12%, avg=1347.37, stdev=64.13, samples=19 00:38:31.805 iops : min= 320, max= 352, avg=336.84, stdev=16.03, samples=19 00:38:31.805 lat (msec) : 50=98.64%, 100=1.30%, 250=0.06% 00:38:31.805 cpu : usr=98.18%, sys=1.33%, ctx=15, majf=0, minf=1636 00:38:31.805 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:31.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.805 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.806 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.806 filename2: (groupid=0, jobs=1): err= 0: pid=1569927: Wed Jul 10 14:40:40 2024 00:38:31.806 read: IOPS=357, BW=1432KiB/s (1466kB/s)(14.0MiB/10001msec) 00:38:31.806 slat (nsec): min=12299, max=89057, avg=25944.61, stdev=10987.36 00:38:31.806 clat (msec): min=18, max=109, avg=44.51, stdev= 7.90 00:38:31.806 lat (msec): min=18, max=109, avg=44.54, stdev= 7.90 00:38:31.806 clat percentiles (msec): 00:38:31.806 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 34], 20.00th=[ 40], 00:38:31.806 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:38:31.806 | 70.00th=[ 48], 80.00th=[ 48], 90.00th=[ 49], 95.00th=[ 50], 00:38:31.806 | 99.00th=[ 63], 99.50th=[ 70], 99.90th=[ 110], 99.95th=[ 110], 00:38:31.806 | 99.99th=[ 110] 00:38:31.806 bw ( KiB/s): min= 1202, max= 1680, per=4.38%, avg=1433.37, stdev=123.19, samples=19 00:38:31.806 iops : min= 300, max= 420, avg=358.32, stdev=30.85, samples=19 00:38:31.806 lat (msec) : 20=0.28%, 50=94.75%, 100=4.53%, 250=0.45% 00:38:31.806 cpu : usr=97.48%, sys=1.77%, ctx=99, majf=0, minf=1636 00:38:31.806 IO depths : 1=2.6%, 2=5.2%, 4=12.2%, 8=68.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:38:31.806 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.806 complete : 0=0.0%, 4=91.1%, 8=5.1%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.806 issued rwts: total=3580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.806 filename2: (groupid=0, jobs=1): err= 0: pid=1569928: Wed Jul 10 14:40:40 2024 00:38:31.806 read: IOPS=341, BW=1365KiB/s (1398kB/s)(13.4MiB/10033msec) 00:38:31.806 slat (usec): min=14, max=109, avg=38.88, stdev=13.49 00:38:31.806 clat (usec): min=20632, max=64101, avg=46545.40, stdev=2918.68 00:38:31.806 lat (usec): min=20665, max=64160, avg=46584.28, stdev=2917.05 00:38:31.806 clat percentiles (usec): 00:38:31.806 | 1.00th=[31065], 5.00th=[44303], 10.00th=[44827], 20.00th=[45876], 00:38:31.806 | 30.00th=[46400], 40.00th=[46924], 50.00th=[46924], 60.00th=[47449], 00:38:31.806 | 70.00th=[47449], 80.00th=[47973], 90.00th=[47973], 95.00th=[48497], 00:38:31.806 | 99.00th=[49021], 99.50th=[54264], 99.90th=[54789], 99.95th=[64226], 00:38:31.806 | 99.99th=[64226] 00:38:31.806 bw ( KiB/s): min= 1280, max= 1532, per=4.16%, avg=1363.00, stdev=74.67, samples=20 00:38:31.806 iops : min= 320, max= 383, avg=340.75, stdev=18.67, samples=20 00:38:31.806 lat (msec) : 50=99.42%, 100=0.58% 00:38:31.806 cpu : usr=94.32%, sys=2.91%, ctx=88, majf=0, minf=1637 00:38:31.806 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:31.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.806 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.806 issued rwts: total=3424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.806 filename2: (groupid=0, jobs=1): err= 0: pid=1569929: Wed Jul 10 14:40:40 2024 00:38:31.806 read: IOPS=338, BW=1354KiB/s (1386kB/s)(13.2MiB/10018msec) 00:38:31.806 slat (nsec): min=11990, max=89920, avg=28703.36, stdev=7430.84 00:38:31.806 clat (msec): min=25, max=102, avg=47.01, stdev= 4.40 00:38:31.806 lat (msec): min=25, max=102, avg=47.04, stdev= 4.40 00:38:31.806 clat percentiles (msec): 00:38:31.806 | 1.00th=[ 41], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 46], 00:38:31.806 | 30.00th=[ 47], 40.00th=[ 47], 50.00th=[ 47], 60.00th=[ 48], 00:38:31.806 | 70.00th=[ 48], 80.00th=[ 48], 90.00th=[ 49], 95.00th=[ 49], 00:38:31.806 | 99.00th=[ 57], 99.50th=[ 59], 99.90th=[ 103], 99.95th=[ 103], 00:38:31.806 | 99.99th=[ 103] 00:38:31.806 bw ( KiB/s): min= 1154, max= 1408, per=4.12%, avg=1347.47, stdev=78.03, samples=19 00:38:31.806 iops : min= 288, max= 352, avg=336.84, stdev=19.58, samples=19 00:38:31.806 lat (msec) : 50=98.94%, 100=0.59%, 250=0.47% 00:38:31.806 cpu : usr=97.74%, sys=1.79%, ctx=18, majf=0, minf=1634 00:38:31.806 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:31.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.806 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.806 issued rwts: total=3390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.806 filename2: (groupid=0, jobs=1): err= 0: pid=1569930: Wed Jul 10 14:40:40 2024 00:38:31.806 read: IOPS=354, BW=1420KiB/s (1454kB/s)(13.9MiB/10054msec) 00:38:31.806 slat (usec): min=12, max=131, avg=41.31, stdev=18.38 00:38:31.806 clat (msec): min=19, max=101, avg=44.73, stdev= 7.80 00:38:31.806 lat (msec): min=19, max=101, avg=44.77, stdev= 
7.80 00:38:31.806 clat percentiles (msec): 00:38:31.806 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 34], 20.00th=[ 40], 00:38:31.806 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:38:31.806 | 70.00th=[ 48], 80.00th=[ 48], 90.00th=[ 49], 95.00th=[ 54], 00:38:31.806 | 99.00th=[ 67], 99.50th=[ 75], 99.90th=[ 102], 99.95th=[ 102], 00:38:31.806 | 99.99th=[ 102] 00:38:31.806 bw ( KiB/s): min= 1152, max= 1632, per=4.31%, avg=1411.37, stdev=124.46, samples=19 00:38:31.806 iops : min= 288, max= 408, avg=352.84, stdev=31.11, samples=19 00:38:31.806 lat (msec) : 20=0.11%, 50=93.53%, 100=5.91%, 250=0.45% 00:38:31.806 cpu : usr=98.12%, sys=1.35%, ctx=19, majf=0, minf=1634 00:38:31.806 IO depths : 1=2.6%, 2=6.4%, 4=16.9%, 8=63.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:38:31.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.806 complete : 0=0.0%, 4=92.0%, 8=3.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.806 issued rwts: total=3568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.806 filename2: (groupid=0, jobs=1): err= 0: pid=1569931: Wed Jul 10 14:40:40 2024 00:38:31.806 read: IOPS=337, BW=1349KiB/s (1381kB/s)(13.2MiB/10011msec) 00:38:31.806 slat (usec): min=12, max=107, avg=51.80, stdev=17.91 00:38:31.806 clat (usec): min=29531, max=95795, avg=47023.56, stdev=3906.77 00:38:31.806 lat (usec): min=29573, max=95841, avg=47075.36, stdev=3905.58 00:38:31.806 clat percentiles (usec): 00:38:31.806 | 1.00th=[44303], 5.00th=[44827], 10.00th=[44827], 20.00th=[45876], 00:38:31.806 | 30.00th=[46400], 40.00th=[46400], 50.00th=[46924], 60.00th=[46924], 00:38:31.806 | 70.00th=[47449], 80.00th=[47973], 90.00th=[47973], 95.00th=[48497], 00:38:31.806 | 99.00th=[58983], 99.50th=[69731], 99.90th=[95945], 99.95th=[95945], 00:38:31.806 | 99.99th=[95945] 00:38:31.806 bw ( KiB/s): min= 1152, max= 1408, per=4.12%, avg=1347.37, stdev=71.67, samples=19 00:38:31.806 iops : min= 288, max= 352, avg=336.84, stdev=17.92, samples=19 00:38:31.806 lat (msec) : 50=98.52%, 100=1.48% 00:38:31.806 cpu : usr=97.11%, sys=1.84%, ctx=93, majf=0, minf=1636 00:38:31.806 IO depths : 1=0.5%, 2=6.8%, 4=25.0%, 8=55.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:38:31.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.806 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.806 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.806 filename2: (groupid=0, jobs=1): err= 0: pid=1569932: Wed Jul 10 14:40:40 2024 00:38:31.806 read: IOPS=344, BW=1376KiB/s (1409kB/s)(13.5MiB/10045msec) 00:38:31.806 slat (usec): min=11, max=124, avg=22.10, stdev=10.78 00:38:31.806 clat (usec): min=3591, max=59670, avg=46262.86, stdev=4975.27 00:38:31.806 lat (usec): min=3613, max=59700, avg=46284.96, stdev=4973.69 00:38:31.806 clat percentiles (usec): 00:38:31.806 | 1.00th=[14615], 5.00th=[44827], 10.00th=[44827], 20.00th=[45876], 00:38:31.806 | 30.00th=[46400], 40.00th=[46924], 50.00th=[46924], 60.00th=[47449], 00:38:31.806 | 70.00th=[47449], 80.00th=[47973], 90.00th=[48497], 95.00th=[48497], 00:38:31.806 | 99.00th=[49546], 99.50th=[57410], 99.90th=[58459], 99.95th=[59507], 00:38:31.806 | 99.99th=[59507] 00:38:31.806 bw ( KiB/s): min= 1280, max= 1664, per=4.20%, avg=1376.00, stdev=89.61, samples=20 00:38:31.806 iops : min= 320, max= 416, avg=344.00, stdev=22.40, samples=20 00:38:31.806 lat (msec) : 4=0.26%, 10=0.20%, 
20=0.98%, 50=97.69%, 100=0.87% 00:38:31.806 cpu : usr=97.84%, sys=1.67%, ctx=23, majf=0, minf=1637 00:38:31.806 IO depths : 1=4.0%, 2=10.2%, 4=24.7%, 8=52.7%, 16=8.5%, 32=0.0%, >=64=0.0% 00:38:31.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.806 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.806 issued rwts: total=3456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:31.806 00:38:31.806 Run status group 0 (all jobs): 00:38:31.806 READ: bw=32.0MiB/s (33.5MB/s), 1349KiB/s-1459KiB/s (1381kB/s-1494kB/s), io=321MiB (337MB), run=10001-10054msec 00:38:31.806 ----------------------------------------------------- 00:38:31.806 Suppressions used: 00:38:31.806 count bytes template 00:38:31.806 45 402 /usr/src/fio/parse.c 00:38:31.806 1 8 libtcmalloc_minimal.so 00:38:31.806 1 904 libcrypto.so 00:38:31.806 ----------------------------------------------------- 00:38:31.806 00:38:31.806 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:31.806 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:31.806 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:31.806 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:31.806 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:31.806 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:31.806 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:31.806 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:32.065 
14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.065 bdev_null0 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.065 [2024-07-10 14:40:41.357952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.065 bdev_null1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:32.065 14:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:32.065 { 00:38:32.065 "params": { 00:38:32.065 "name": "Nvme$subsystem", 00:38:32.065 "trtype": "$TEST_TRANSPORT", 00:38:32.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:32.066 "adrfam": "ipv4", 00:38:32.066 "trsvcid": "$NVMF_PORT", 00:38:32.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:32.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:32.066 "hdgst": ${hdgst:-false}, 00:38:32.066 "ddgst": ${ddgst:-false} 00:38:32.066 }, 00:38:32.066 "method": "bdev_nvme_attach_controller" 00:38:32.066 } 00:38:32.066 EOF 00:38:32.066 )") 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:32.066 { 00:38:32.066 "params": { 00:38:32.066 "name": "Nvme$subsystem", 00:38:32.066 "trtype": "$TEST_TRANSPORT", 00:38:32.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:32.066 "adrfam": "ipv4", 00:38:32.066 "trsvcid": "$NVMF_PORT", 00:38:32.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:32.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:32.066 "hdgst": ${hdgst:-false}, 00:38:32.066 "ddgst": ${ddgst:-false} 00:38:32.066 }, 00:38:32.066 "method": "bdev_nvme_attach_controller" 00:38:32.066 } 00:38:32.066 EOF 00:38:32.066 )") 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:32.066 
14:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:32.066 "params": { 00:38:32.066 "name": "Nvme0", 00:38:32.066 "trtype": "tcp", 00:38:32.066 "traddr": "10.0.0.2", 00:38:32.066 "adrfam": "ipv4", 00:38:32.066 "trsvcid": "4420", 00:38:32.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:32.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:32.066 "hdgst": false, 00:38:32.066 "ddgst": false 00:38:32.066 }, 00:38:32.066 "method": "bdev_nvme_attach_controller" 00:38:32.066 },{ 00:38:32.066 "params": { 00:38:32.066 "name": "Nvme1", 00:38:32.066 "trtype": "tcp", 00:38:32.066 "traddr": "10.0.0.2", 00:38:32.066 "adrfam": "ipv4", 00:38:32.066 "trsvcid": "4420", 00:38:32.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:32.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:32.066 "hdgst": false, 00:38:32.066 "ddgst": false 00:38:32.066 }, 00:38:32.066 "method": "bdev_nvme_attach_controller" 00:38:32.066 }' 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:32.066 14:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:32.324 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:32.324 ... 00:38:32.324 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:32.324 ... 
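For reference, the rpc_cmd trace above maps onto the standalone SPDK RPC sequence below. This is a minimal sketch, assuming a running nvmf_tgt with a TCP transport already created earlier in the test (outside this excerpt) and the in-tree scripts/rpc.py; the names, sizes and addresses are taken from the log.

# Null bdev with 512B data blocks + 16B metadata, protected with DIF type 1 (64 MB total).
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# NVMe-oF subsystem, namespace backed by the null bdev, and a TCP listener on 10.0.0.2:4420.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# The second target (bdev_null1 / cnode1) is created the same way with sub_id=1.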
00:38:32.324 fio-3.35 00:38:32.324 Starting 4 threads 00:38:32.324 EAL: No free 2048 kB hugepages reported on node 1 00:38:38.880 00:38:38.880 filename0: (groupid=0, jobs=1): err= 0: pid=1571388: Wed Jul 10 14:40:47 2024 00:38:38.880 read: IOPS=1245, BW=9965KiB/s (10.2MB/s)(48.7MiB/5003msec) 00:38:38.880 slat (nsec): min=6426, max=53306, avg=16351.53, stdev=5045.51 00:38:38.880 clat (usec): min=1785, max=14733, avg=6361.11, stdev=1130.83 00:38:38.880 lat (usec): min=1803, max=14755, avg=6377.47, stdev=1130.69 00:38:38.880 clat percentiles (usec): 00:38:38.880 | 1.00th=[ 3556], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 5538], 00:38:38.880 | 30.00th=[ 5866], 40.00th=[ 6128], 50.00th=[ 6325], 60.00th=[ 6521], 00:38:38.880 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7570], 95.00th=[ 8160], 00:38:38.880 | 99.00th=[ 9634], 99.50th=[10552], 99.90th=[14222], 99.95th=[14615], 00:38:38.880 | 99.99th=[14746] 00:38:38.880 bw ( KiB/s): min= 9058, max=10368, per=25.45%, avg=9960.20, stdev=358.10, samples=10 00:38:38.880 iops : min= 1132, max= 1296, avg=1245.00, stdev=44.83, samples=10 00:38:38.880 lat (msec) : 2=0.03%, 4=2.12%, 10=97.06%, 20=0.79% 00:38:38.880 cpu : usr=92.32%, sys=7.12%, ctx=9, majf=0, minf=1637 00:38:38.880 IO depths : 1=0.7%, 2=17.4%, 4=56.6%, 8=25.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:38.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.880 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.880 issued rwts: total=6232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.880 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:38.880 filename0: (groupid=0, jobs=1): err= 0: pid=1571389: Wed Jul 10 14:40:47 2024 00:38:38.880 read: IOPS=1023, BW=8189KiB/s (8385kB/s)(40.0MiB/5001msec) 00:38:38.880 slat (nsec): min=6866, max=52730, avg=15558.11, stdev=5330.52 00:38:38.880 clat (usec): min=1324, max=14931, avg=7764.71, stdev=1727.52 00:38:38.880 lat (usec): min=1343, max=14942, avg=7780.26, stdev=1727.02 00:38:38.880 clat percentiles (usec): 00:38:38.880 | 1.00th=[ 4948], 5.00th=[ 5800], 10.00th=[ 6128], 20.00th=[ 6390], 00:38:38.880 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7701], 00:38:38.880 | 70.00th=[ 8225], 80.00th=[ 9110], 90.00th=[10290], 95.00th=[11338], 00:38:38.880 | 99.00th=[12911], 99.50th=[13304], 99.90th=[14091], 99.95th=[14353], 00:38:38.880 | 99.99th=[14877] 00:38:38.880 bw ( KiB/s): min= 7408, max= 8944, per=20.92%, avg=8188.33, stdev=471.14, samples=9 00:38:38.880 iops : min= 926, max= 1118, avg=1023.44, stdev=58.98, samples=9 00:38:38.880 lat (msec) : 2=0.12%, 4=0.33%, 10=87.17%, 20=12.39% 00:38:38.880 cpu : usr=93.40%, sys=6.02%, ctx=16, majf=0, minf=1637 00:38:38.880 IO depths : 1=0.2%, 2=8.7%, 4=63.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:38.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.880 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.880 issued rwts: total=5119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.880 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:38.880 filename1: (groupid=0, jobs=1): err= 0: pid=1571390: Wed Jul 10 14:40:47 2024 00:38:38.880 read: IOPS=1258, BW=9.83MiB/s (10.3MB/s)(49.2MiB/5005msec) 00:38:38.880 slat (nsec): min=6843, max=53417, avg=16052.92, stdev=4919.85 00:38:38.880 clat (usec): min=1890, max=15925, avg=6301.23, stdev=1179.85 00:38:38.880 lat (usec): min=1909, max=15968, avg=6317.28, stdev=1179.77 00:38:38.880 clat percentiles (usec): 00:38:38.880 
| 1.00th=[ 3425], 5.00th=[ 4424], 10.00th=[ 4948], 20.00th=[ 5473], 00:38:38.880 | 30.00th=[ 5800], 40.00th=[ 6063], 50.00th=[ 6325], 60.00th=[ 6521], 00:38:38.880 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7504], 95.00th=[ 8225], 00:38:38.880 | 99.00th=[ 9634], 99.50th=[10290], 99.90th=[15795], 99.95th=[15926], 00:38:38.880 | 99.99th=[15926] 00:38:38.880 bw ( KiB/s): min= 9536, max=10704, per=25.71%, avg=10064.00, stdev=391.85, samples=10 00:38:38.880 iops : min= 1192, max= 1338, avg=1258.00, stdev=48.98, samples=10 00:38:38.880 lat (msec) : 2=0.02%, 4=2.54%, 10=96.78%, 20=0.67% 00:38:38.880 cpu : usr=92.41%, sys=7.01%, ctx=12, majf=0, minf=1640 00:38:38.880 IO depths : 1=0.6%, 2=12.2%, 4=61.0%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:38.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.880 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.880 issued rwts: total=6298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.880 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:38.880 filename1: (groupid=0, jobs=1): err= 0: pid=1571391: Wed Jul 10 14:40:47 2024 00:38:38.880 read: IOPS=1366, BW=10.7MiB/s (11.2MB/s)(53.4MiB/5003msec) 00:38:38.880 slat (nsec): min=6926, max=76481, avg=15917.07, stdev=5218.39 00:38:38.880 clat (usec): min=2416, max=17657, avg=5795.86, stdev=1220.20 00:38:38.880 lat (usec): min=2427, max=17679, avg=5811.78, stdev=1220.07 00:38:38.880 clat percentiles (usec): 00:38:38.880 | 1.00th=[ 2835], 5.00th=[ 3523], 10.00th=[ 4228], 20.00th=[ 4883], 00:38:38.880 | 30.00th=[ 5342], 40.00th=[ 5604], 50.00th=[ 5866], 60.00th=[ 6128], 00:38:38.880 | 70.00th=[ 6325], 80.00th=[ 6652], 90.00th=[ 7046], 95.00th=[ 7635], 00:38:38.880 | 99.00th=[ 8979], 99.50th=[ 9503], 99.90th=[13698], 99.95th=[13698], 00:38:38.880 | 99.99th=[17695] 00:38:38.880 bw ( KiB/s): min=10112, max=11920, per=28.01%, avg=10963.56, stdev=617.80, samples=9 00:38:38.880 iops : min= 1264, max= 1490, avg=1370.44, stdev=77.23, samples=9 00:38:38.880 lat (msec) : 4=8.18%, 10=91.56%, 20=0.26% 00:38:38.880 cpu : usr=92.00%, sys=6.96%, ctx=15, majf=0, minf=1635 00:38:38.880 IO depths : 1=1.1%, 2=12.8%, 4=61.3%, 8=24.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:38.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.880 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.880 issued rwts: total=6836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.880 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:38.880 00:38:38.880 Run status group 0 (all jobs): 00:38:38.880 READ: bw=38.2MiB/s (40.1MB/s), 8189KiB/s-10.7MiB/s (8385kB/s-11.2MB/s), io=191MiB (201MB), run=5001-5005msec 00:38:39.446 ----------------------------------------------------- 00:38:39.446 Suppressions used: 00:38:39.446 count bytes template 00:38:39.446 6 52 /usr/src/fio/parse.c 00:38:39.446 1 8 libtcmalloc_minimal.so 00:38:39.446 1 904 libcrypto.so 00:38:39.446 ----------------------------------------------------- 00:38:39.446 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:39.446 14:40:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.446 00:38:39.446 real 0m27.799s 00:38:39.446 user 4m33.833s 00:38:39.446 sys 0m8.641s 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:39.446 14:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:39.446 ************************************ 00:38:39.446 END TEST fio_dif_rand_params 00:38:39.446 ************************************ 00:38:39.446 14:40:48 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:39.446 14:40:48 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:39.446 14:40:48 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:39.446 14:40:48 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:39.446 14:40:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:39.446 ************************************ 00:38:39.446 START TEST fio_dif_digest 00:38:39.446 ************************************ 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest 
-- target/dif.sh@127 -- # numjobs=3 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:39.446 bdev_null0 00:38:39.446 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:39.447 [2024-07-10 14:40:48.918536] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:39.447 { 00:38:39.447 "params": { 
00:38:39.447 "name": "Nvme$subsystem", 00:38:39.447 "trtype": "$TEST_TRANSPORT", 00:38:39.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:39.447 "adrfam": "ipv4", 00:38:39.447 "trsvcid": "$NVMF_PORT", 00:38:39.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:39.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:39.447 "hdgst": ${hdgst:-false}, 00:38:39.447 "ddgst": ${ddgst:-false} 00:38:39.447 }, 00:38:39.447 "method": "bdev_nvme_attach_controller" 00:38:39.447 } 00:38:39.447 EOF 00:38:39.447 )") 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:39.447 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:38:39.705 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:39.705 14:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:38:39.705 14:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:38:39.705 14:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:39.705 "params": { 00:38:39.705 "name": "Nvme0", 00:38:39.705 "trtype": "tcp", 00:38:39.705 "traddr": "10.0.0.2", 00:38:39.705 "adrfam": "ipv4", 00:38:39.705 "trsvcid": "4420", 00:38:39.705 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:39.705 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:39.705 "hdgst": true, 00:38:39.705 "ddgst": true 00:38:39.705 }, 00:38:39.705 "method": "bdev_nvme_attach_controller" 00:38:39.705 }' 00:38:39.705 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:39.705 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:39.705 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:38:39.705 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:39.705 14:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:39.963 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:39.963 ... 00:38:39.963 fio-3.35 00:38:39.963 Starting 3 threads 00:38:39.963 EAL: No free 2048 kB hugepages reported on node 1 00:38:52.162 00:38:52.162 filename0: (groupid=0, jobs=1): err= 0: pid=1572314: Wed Jul 10 14:41:00 2024 00:38:52.162 read: IOPS=169, BW=21.2MiB/s (22.2MB/s)(213MiB/10047msec) 00:38:52.162 slat (nsec): min=6474, max=60388, avg=28972.27, stdev=7464.24 00:38:52.163 clat (usec): min=10410, max=61941, avg=17627.39, stdev=4011.93 00:38:52.163 lat (usec): min=10440, max=61976, avg=17656.36, stdev=4012.06 00:38:52.163 clat percentiles (usec): 00:38:52.163 | 1.00th=[12125], 5.00th=[14877], 10.00th=[15664], 20.00th=[16319], 00:38:52.163 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:38:52.163 | 70.00th=[17957], 80.00th=[18482], 90.00th=[19006], 95.00th=[19530], 00:38:52.163 | 99.00th=[21103], 99.50th=[57934], 99.90th=[60556], 99.95th=[62129], 00:38:52.163 | 99.99th=[62129] 00:38:52.163 bw ( KiB/s): min=17664, max=23552, per=32.87%, avg=21785.60, stdev=1242.81, samples=20 00:38:52.163 iops : min= 138, max= 184, avg=170.20, stdev= 9.71, samples=20 00:38:52.163 lat (msec) : 20=97.18%, 50=2.00%, 100=0.82% 00:38:52.163 cpu : usr=86.11%, sys=10.36%, ctx=593, majf=0, minf=1637 00:38:52.163 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:52.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.163 issued rwts: total=1704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:52.163 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:52.163 filename0: (groupid=0, jobs=1): err= 0: pid=1572315: Wed Jul 10 14:41:00 2024 00:38:52.163 read: IOPS=173, BW=21.7MiB/s (22.8MB/s)(218MiB/10045msec) 00:38:52.163 slat (nsec): min=6249, max=39989, avg=21519.91, stdev=2563.64 00:38:52.163 clat (usec): min=9695, max=59306, avg=17217.23, stdev=2749.38 00:38:52.163 lat (usec): min=9715, max=59328, avg=17238.75, stdev=2749.28 00:38:52.163 clat percentiles (usec): 00:38:52.163 | 1.00th=[11207], 5.00th=[13829], 10.00th=[15270], 
20.00th=[16188], 00:38:52.163 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:38:52.163 | 70.00th=[17957], 80.00th=[18220], 90.00th=[19006], 95.00th=[19530], 00:38:52.163 | 99.00th=[20841], 99.50th=[23987], 99.90th=[58983], 99.95th=[59507], 00:38:52.163 | 99.99th=[59507] 00:38:52.163 bw ( KiB/s): min=20224, max=24320, per=33.66%, avg=22310.40, stdev=966.02, samples=20 00:38:52.163 iops : min= 158, max= 190, avg=174.30, stdev= 7.55, samples=20 00:38:52.163 lat (msec) : 10=0.06%, 20=97.36%, 50=2.35%, 100=0.23% 00:38:52.163 cpu : usr=92.03%, sys=7.38%, ctx=18, majf=0, minf=1634 00:38:52.163 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:52.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.163 issued rwts: total=1745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:52.163 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:52.163 filename0: (groupid=0, jobs=1): err= 0: pid=1572316: Wed Jul 10 14:41:00 2024 00:38:52.163 read: IOPS=174, BW=21.8MiB/s (22.9MB/s)(219MiB/10047msec) 00:38:52.163 slat (nsec): min=6105, max=43364, avg=20706.67, stdev=2657.76 00:38:52.163 clat (usec): min=10471, max=60720, avg=17142.88, stdev=4235.38 00:38:52.163 lat (usec): min=10491, max=60740, avg=17163.59, stdev=4235.38 00:38:52.163 clat percentiles (usec): 00:38:52.163 | 1.00th=[11863], 5.00th=[14353], 10.00th=[15139], 20.00th=[15795], 00:38:52.163 | 30.00th=[16319], 40.00th=[16581], 50.00th=[16909], 60.00th=[17171], 00:38:52.163 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:38:52.163 | 99.00th=[21627], 99.50th=[57934], 99.90th=[60031], 99.95th=[60556], 00:38:52.163 | 99.99th=[60556] 00:38:52.163 bw ( KiB/s): min=18432, max=24320, per=33.82%, avg=22415.00, stdev=1300.64, samples=20 00:38:52.163 iops : min= 144, max= 190, avg=175.10, stdev=10.17, samples=20 00:38:52.163 lat (msec) : 20=97.55%, 50=1.54%, 100=0.91% 00:38:52.163 cpu : usr=92.78%, sys=6.61%, ctx=22, majf=0, minf=1640 00:38:52.163 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:52.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.163 issued rwts: total=1753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:52.163 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:52.163 00:38:52.163 Run status group 0 (all jobs): 00:38:52.163 READ: bw=64.7MiB/s (67.9MB/s), 21.2MiB/s-21.8MiB/s (22.2MB/s-22.9MB/s), io=650MiB (682MB), run=10045-10047msec 00:38:52.163 ----------------------------------------------------- 00:38:52.163 Suppressions used: 00:38:52.163 count bytes template 00:38:52.163 5 44 /usr/src/fio/parse.c 00:38:52.163 1 8 libtcmalloc_minimal.so 00:38:52.163 1 904 libcrypto.so 00:38:52.163 ----------------------------------------------------- 00:38:52.163 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.163 00:38:52.163 real 0m12.269s 00:38:52.163 user 0m29.475s 00:38:52.163 sys 0m2.884s 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:52.163 14:41:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:52.163 ************************************ 00:38:52.163 END TEST fio_dif_digest 00:38:52.163 ************************************ 00:38:52.163 14:41:01 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:52.163 14:41:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:52.163 14:41:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:52.163 14:41:01 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:52.163 14:41:01 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:38:52.163 14:41:01 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:52.163 14:41:01 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:38:52.163 14:41:01 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:52.163 14:41:01 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:52.163 rmmod nvme_tcp 00:38:52.163 rmmod nvme_fabrics 00:38:52.163 rmmod nvme_keyring 00:38:52.163 14:41:01 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:52.163 14:41:01 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:38:52.163 14:41:01 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:38:52.163 14:41:01 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1565538 ']' 00:38:52.163 14:41:01 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1565538 00:38:52.163 14:41:01 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1565538 ']' 00:38:52.163 14:41:01 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1565538 00:38:52.163 14:41:01 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:38:52.163 14:41:01 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:52.163 14:41:01 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1565538 00:38:52.163 14:41:01 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:52.163 14:41:01 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:52.163 14:41:01 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1565538' 00:38:52.163 killing process with pid 1565538 00:38:52.163 14:41:01 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1565538 00:38:52.163 14:41:01 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1565538 00:38:53.097 14:41:02 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:53.097 14:41:02 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:54.476 Waiting for block devices as requested 00:38:54.476 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:38:54.476 0000:00:04.7 (8086 
0e27): vfio-pci -> ioatdma 00:38:54.476 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:38:54.476 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:54.734 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:38:54.734 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:38:54.734 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:38:54.734 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:38:54.992 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:38:54.992 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:38:54.992 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:38:54.992 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:55.307 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:38:55.307 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:38:55.307 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:38:55.307 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:38:55.564 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:38:55.564 14:41:04 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:55.564 14:41:04 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:55.564 14:41:04 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:55.564 14:41:04 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:55.564 14:41:04 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:55.564 14:41:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:55.564 14:41:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.092 14:41:06 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:58.092 00:38:58.092 real 1m15.440s 00:38:58.092 user 6m39.834s 00:38:58.092 sys 0m21.807s 00:38:58.092 14:41:06 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:58.092 14:41:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:58.092 ************************************ 00:38:58.092 END TEST nvmf_dif 00:38:58.092 ************************************ 00:38:58.092 14:41:06 -- common/autotest_common.sh@1142 -- # return 0 00:38:58.092 14:41:06 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:58.092 14:41:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:58.092 14:41:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:58.092 14:41:06 -- common/autotest_common.sh@10 -- # set +x 00:38:58.092 ************************************ 00:38:58.092 START TEST nvmf_abort_qd_sizes 00:38:58.092 ************************************ 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:58.092 * Looking for test storage... 
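The "vfio-pci -> nvme" and "vfio-pci -> ioatdma" lines above come from setup.sh reset handing devices the SPDK target had claimed back to their kernel drivers. Below is a minimal sketch of the underlying sysfs rebinding mechanism for a single function; the real script walks every device and also releases hugepages, and only the PCI address is taken from the log.

# Rebind one PCI function from vfio-pci back to the kernel nvme driver (generic sysfs mechanism).
dev=0000:88:00.0
echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind    # detach from vfio-pci
echo nvme > /sys/bus/pci/devices/$dev/driver_override    # make the next probe pick the nvme driver
echo "$dev" > /sys/bus/pci/drivers_probe                 # ask the PCI core to re-probe the device
echo "" > /sys/bus/pci/devices/$dev/driver_override      # clear the override afterwards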
00:38:58.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:58.092 14:41:07 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:38:58.093 14:41:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:59.467 14:41:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:59.467 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:59.467 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:59.467 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:59.467 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:59.467 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:59.725 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:59.725 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:59.725 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:59.725 14:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:59.725 14:41:09 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:59.725 14:41:09 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:59.725 14:41:09 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:59.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:59.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:38:59.725 00:38:59.725 --- 10.0.0.2 ping statistics --- 00:38:59.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.725 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:38:59.725 14:41:09 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:59.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:59.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:38:59.725 00:38:59.725 --- 10.0.0.1 ping statistics --- 00:38:59.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.725 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:38:59.725 14:41:09 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:59.725 14:41:09 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:38:59.725 14:41:09 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:59.725 14:41:09 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:01.097 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:01.098 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:01.098 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:01.098 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:01.098 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:01.098 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:01.098 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:01.098 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:01.098 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:01.098 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:01.098 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:01.098 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:01.098 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:01.098 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:01.098 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:01.098 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:02.034 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1577350 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1577350 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1577350 ']' 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:02.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:02.034 14:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:02.034 [2024-07-10 14:41:11.442177] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:39:02.034 [2024-07-10 14:41:11.442341] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:02.292 EAL: No free 2048 kB hugepages reported on node 1 00:39:02.292 [2024-07-10 14:41:11.584864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:02.550 [2024-07-10 14:41:11.843896] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:02.550 [2024-07-10 14:41:11.843962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:02.550 [2024-07-10 14:41:11.843998] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:02.550 [2024-07-10 14:41:11.844019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:02.550 [2024-07-10 14:41:11.844041] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:02.550 [2024-07-10 14:41:11.844169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:02.550 [2024-07-10 14:41:11.844234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:02.550 [2024-07-10 14:41:11.844319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:02.550 [2024-07-10 14:41:11.844329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:03.116 14:41:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:03.116 14:41:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:39:03.116 14:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:03.116 14:41:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:03.116 14:41:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:03.116 14:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:03.116 14:41:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:39:03.117 14:41:12 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:03.117 14:41:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:03.117 ************************************ 00:39:03.117 START TEST spdk_target_abort 00:39:03.117 ************************************ 00:39:03.117 14:41:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:39:03.117 14:41:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:03.117 14:41:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:39:03.117 14:41:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.117 14:41:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:06.395 spdk_targetn1 00:39:06.395 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.395 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:06.396 [2024-07-10 14:41:15.272512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:06.396 [2024-07-10 14:41:15.318850] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:06.396 14:41:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:06.396 EAL: No free 2048 kB hugepages 
reported on node 1 00:39:09.677 Initializing NVMe Controllers 00:39:09.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:09.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:09.677 Initialization complete. Launching workers. 00:39:09.677 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8994, failed: 0 00:39:09.677 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1225, failed to submit 7769 00:39:09.677 success 792, unsuccess 433, failed 0 00:39:09.677 14:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:09.677 14:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:09.677 EAL: No free 2048 kB hugepages reported on node 1 00:39:12.957 Initializing NVMe Controllers 00:39:12.957 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:12.957 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:12.957 Initialization complete. Launching workers. 00:39:12.957 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8645, failed: 0 00:39:12.957 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1229, failed to submit 7416 00:39:12.957 success 339, unsuccess 890, failed 0 00:39:12.957 14:41:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:12.957 14:41:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:12.957 EAL: No free 2048 kB hugepages reported on node 1 00:39:16.307 Initializing NVMe Controllers 00:39:16.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:16.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:16.307 Initialization complete. Launching workers. 
00:39:16.307 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26076, failed: 0 00:39:16.307 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2622, failed to submit 23454 00:39:16.307 success 187, unsuccess 2435, failed 0 00:39:16.307 14:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:16.307 14:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.307 14:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:16.307 14:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.307 14:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:16.307 14:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.307 14:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:17.681 14:41:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.681 14:41:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1577350 00:39:17.681 14:41:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1577350 ']' 00:39:17.681 14:41:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1577350 00:39:17.681 14:41:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:39:17.681 14:41:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:17.681 14:41:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1577350 00:39:17.681 14:41:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:17.681 14:41:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:17.681 14:41:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1577350' 00:39:17.681 killing process with pid 1577350 00:39:17.681 14:41:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1577350 00:39:17.681 14:41:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1577350 00:39:18.615 00:39:18.615 real 0m15.442s 00:39:18.615 user 0m58.649s 00:39:18.615 sys 0m3.003s 00:39:18.615 14:41:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:18.615 14:41:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:18.615 ************************************ 00:39:18.615 END TEST spdk_target_abort 00:39:18.615 ************************************ 00:39:18.615 14:41:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:39:18.615 14:41:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:18.615 14:41:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:18.615 14:41:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:18.615 14:41:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:18.615 
************************************ 00:39:18.615 START TEST kernel_target_abort 00:39:18.615 ************************************ 00:39:18.615 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:39:18.615 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:18.616 14:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:19.550 Waiting for block devices as requested 00:39:19.550 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:19.809 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:19.809 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:19.809 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:20.067 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:20.067 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:20.067 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:20.067 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:20.325 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:20.325 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:20.325 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:20.325 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:20.582 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:20.582 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:20.582 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:20.840 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:20.840 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:21.097 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:39:21.097 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:21.097 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:39:21.097 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:39:21.097 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:21.097 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:39:21.097 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:39:21.097 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:39:21.097 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:21.356 No valid GPT data, bailing 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:21.356 14:41:30 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:39:21.356 00:39:21.356 Discovery Log Number of Records 2, Generation counter 2 00:39:21.356 =====Discovery Log Entry 0====== 00:39:21.356 trtype: tcp 00:39:21.356 adrfam: ipv4 00:39:21.356 subtype: current discovery subsystem 00:39:21.356 treq: not specified, sq flow control disable supported 00:39:21.356 portid: 1 00:39:21.356 trsvcid: 4420 00:39:21.356 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:21.356 traddr: 10.0.0.1 00:39:21.356 eflags: none 00:39:21.356 sectype: none 00:39:21.356 =====Discovery Log Entry 1====== 00:39:21.356 trtype: tcp 00:39:21.356 adrfam: ipv4 00:39:21.356 subtype: nvme subsystem 00:39:21.356 treq: not specified, sq flow control disable supported 00:39:21.356 portid: 1 00:39:21.356 trsvcid: 4420 00:39:21.356 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:21.356 traddr: 10.0.0.1 00:39:21.356 eflags: none 00:39:21.356 sectype: none 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:21.356 14:41:30 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:21.356 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:21.357 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:21.357 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:21.357 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:21.357 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:21.357 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:21.357 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:21.357 14:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:21.357 EAL: No free 2048 kB hugepages reported on node 1 00:39:24.633 Initializing NVMe Controllers 00:39:24.633 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:24.633 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:24.633 Initialization complete. Launching workers. 00:39:24.633 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26706, failed: 0 00:39:24.633 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26706, failed to submit 0 00:39:24.633 success 0, unsuccess 26706, failed 0 00:39:24.633 14:41:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:24.633 14:41:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:24.633 EAL: No free 2048 kB hugepages reported on node 1 00:39:27.910 Initializing NVMe Controllers 00:39:27.910 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:27.910 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:27.910 Initialization complete. Launching workers. 
00:39:27.910 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 51970, failed: 0 00:39:27.910 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13094, failed to submit 38876 00:39:27.910 success 0, unsuccess 13094, failed 0 00:39:27.910 14:41:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:27.910 14:41:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:27.910 EAL: No free 2048 kB hugepages reported on node 1 00:39:31.186 Initializing NVMe Controllers 00:39:31.186 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:31.186 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:31.186 Initialization complete. Launching workers. 00:39:31.186 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55630, failed: 0 00:39:31.186 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13882, failed to submit 41748 00:39:31.186 success 0, unsuccess 13882, failed 0 00:39:31.186 14:41:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:31.186 14:41:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:31.186 14:41:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:39:31.186 14:41:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:31.186 14:41:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:31.186 14:41:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:31.186 14:41:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:31.186 14:41:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:39:31.186 14:41:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:39:31.186 14:41:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:32.118 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:32.118 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:32.118 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:32.118 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:32.118 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:32.118 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:32.118 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:32.118 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:32.118 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:32.118 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:32.118 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:32.118 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:32.118 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:32.118 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:39:32.373 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:32.373 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:33.308 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:39:33.308 00:39:33.308 real 0m14.778s 00:39:33.308 user 0m5.763s 00:39:33.308 sys 0m3.573s 00:39:33.308 14:41:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:33.308 14:41:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:33.308 ************************************ 00:39:33.308 END TEST kernel_target_abort 00:39:33.308 ************************************ 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:33.308 rmmod nvme_tcp 00:39:33.308 rmmod nvme_fabrics 00:39:33.308 rmmod nvme_keyring 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1577350 ']' 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1577350 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1577350 ']' 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1577350 00:39:33.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1577350) - No such process 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1577350 is not found' 00:39:33.308 Process with pid 1577350 is not found 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:39:33.308 14:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:34.241 Waiting for block devices as requested 00:39:34.500 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:34.500 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:34.757 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:34.757 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:34.757 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:34.757 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:35.014 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:35.014 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:35.014 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:35.014 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:35.272 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:35.272 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:35.272 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:35.272 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:39:35.529 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:35.529 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:35.529 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:35.788 14:41:45 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:35.788 14:41:45 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:35.788 14:41:45 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:35.788 14:41:45 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:35.788 14:41:45 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.788 14:41:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:35.788 14:41:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.686 14:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:37.686 00:39:37.686 real 0m40.067s 00:39:37.686 user 1m6.601s 00:39:37.686 sys 0m9.911s 00:39:37.686 14:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:37.686 14:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:37.686 ************************************ 00:39:37.686 END TEST nvmf_abort_qd_sizes 00:39:37.686 ************************************ 00:39:37.686 14:41:47 -- common/autotest_common.sh@1142 -- # return 0 00:39:37.686 14:41:47 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:37.686 14:41:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:37.686 14:41:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:37.686 14:41:47 -- common/autotest_common.sh@10 -- # set +x 00:39:37.686 ************************************ 00:39:37.686 START TEST keyring_file 00:39:37.686 ************************************ 00:39:37.686 14:41:47 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:37.686 * Looking for test storage... 
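[editor's sketch] For readers following the nvmf_abort_qd_sizes trace that ends above, the whole test reduces to a short command sequence that is scattered through the xtrace output. The following is only a condensed sketch assembled from commands visible in the log (interface names cvl_0_0/cvl_0_1, addresses, PCI address 0000:88:00.0 and the queue-depth list all come from this run; rpc_cmd is the harness wrapper around scripts/rpc.py, and error handling is omitted):

# move one port of the NIC pair into a private namespace and wire up a /24 between them
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# start the SPDK target inside the namespace and export the local NVMe device over TCP
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

# drive abort traffic at each queue depth; the NS:/CTRLR: counters above are this loop's output
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done

The kernel_target_abort half traced above does the same from the other direction: instead of an SPDK target, it builds an in-kernel nvmet subsystem under /sys/kernel/config/nvmet (mkdir of the subsystem, namespace 1 and port 1, then ln -s of the subsystem into the port, as shown in the trace), and points the same abort loop at 10.0.0.1 rather than 10.0.0.2.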
00:39:37.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:37.945 14:41:47 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:37.945 14:41:47 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.945 14:41:47 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.945 14:41:47 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.945 14:41:47 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.945 14:41:47 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.945 14:41:47 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.945 14:41:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.945 14:41:47 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.945 14:41:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:37.946 14:41:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@47 -- # : 0 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:37.946 14:41:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:37.946 14:41:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:37.946 14:41:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:37.946 14:41:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:37.946 14:41:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:37.946 14:41:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8kCdNaYTrK 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:37.946 14:41:47 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8kCdNaYTrK 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8kCdNaYTrK 00:39:37.946 14:41:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.8kCdNaYTrK 00:39:37.946 14:41:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Jaae3OifPg 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:37.946 14:41:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Jaae3OifPg 00:39:37.946 14:41:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Jaae3OifPg 00:39:37.946 14:41:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Jaae3OifPg 00:39:37.946 14:41:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=1583576 00:39:37.946 14:41:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:37.946 14:41:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1583576 00:39:37.946 14:41:47 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1583576 ']' 00:39:37.946 14:41:47 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.946 14:41:47 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:37.946 14:41:47 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:37.946 14:41:47 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:37.946 14:41:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:37.946 [2024-07-10 14:41:47.372013] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
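[editor's sketch] The two /tmp/tmp.* key files created above come out of prep_key in test/keyring/common.sh. xtrace does not print redirections, so the following is only an approximate reconstruction of what it does, using the values from this run; the inline python that turns the hex key into the NVMeTLSkey-1 interchange string is elided because it is not shown in the trace:

# approximate sketch of: prep_key key0 00112233445566778899aabbccddeeff 0
key0path=$(mktemp)                                              # /tmp/tmp.8kCdNaYTrK in this run
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"   # writes the NVMeTLSkey-1... form
chmod 0600 "$key0path"                                          # restrict permissions on the key file
echo "$key0path"                                                # callers capture this as key0path / key1path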
00:39:37.946 [2024-07-10 14:41:47.372150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583576 ] 00:39:38.204 EAL: No free 2048 kB hugepages reported on node 1 00:39:38.204 [2024-07-10 14:41:47.499338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.461 [2024-07-10 14:41:47.755070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:39:39.395 14:41:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:39.395 [2024-07-10 14:41:48.612915] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:39.395 null0 00:39:39.395 [2024-07-10 14:41:48.644931] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:39.395 [2024-07-10 14:41:48.645493] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:39.395 [2024-07-10 14:41:48.652984] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:39.395 14:41:48 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:39.395 [2024-07-10 14:41:48.660988] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:39.395 request: 00:39:39.395 { 00:39:39.395 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:39.395 "secure_channel": false, 00:39:39.395 "listen_address": { 00:39:39.395 "trtype": "tcp", 00:39:39.395 "traddr": "127.0.0.1", 00:39:39.395 "trsvcid": "4420" 00:39:39.395 }, 00:39:39.395 "method": "nvmf_subsystem_add_listener", 00:39:39.395 "req_id": 1 00:39:39.395 } 00:39:39.395 Got JSON-RPC error response 00:39:39.395 response: 00:39:39.395 { 00:39:39.395 "code": -32602, 00:39:39.395 "message": "Invalid parameters" 00:39:39.395 } 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 
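
Note: at this point the target is already listening on 127.0.0.1:4420, so the NOT rpc_cmd nvmf_subsystem_add_listener call above is a negative test: the duplicate listener has to be rejected with "Listener already exists" and a -32602 JSON-RPC error. Outside the harness the same check looks roughly like the sketch below; the if/else stands in for the NOT helper, and the default /var/tmp/spdk.sock target socket is assumed.

# Expect failure: a listener for cnode0 on 127.0.0.1:4420 already exists.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if "$rpc" -s /var/tmp/spdk.sock nvmf_subsystem_add_listener \
        -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
    echo "unexpected success: duplicate listener was accepted" >&2
    exit 1
else
    echo "duplicate listener rejected as expected"
fi
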
00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:39.395 14:41:48 keyring_file -- keyring/file.sh@46 -- # bperfpid=1583717 00:39:39.395 14:41:48 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:39.395 14:41:48 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1583717 /var/tmp/bperf.sock 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1583717 ']' 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:39.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:39.395 14:41:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:39.395 [2024-07-10 14:41:48.742919] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:39:39.395 [2024-07-10 14:41:48.743052] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583717 ] 00:39:39.395 EAL: No free 2048 kB hugepages reported on node 1 00:39:39.395 [2024-07-10 14:41:48.870566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.653 [2024-07-10 14:41:49.124575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:40.229 14:41:49 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:40.229 14:41:49 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:39:40.229 14:41:49 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8kCdNaYTrK 00:39:40.229 14:41:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8kCdNaYTrK 00:39:40.487 14:41:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Jaae3OifPg 00:39:40.487 14:41:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Jaae3OifPg 00:39:40.744 14:41:50 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:39:40.744 14:41:50 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:39:40.744 14:41:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:40.744 14:41:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:40.744 14:41:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:41.002 14:41:50 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.8kCdNaYTrK == \/\t\m\p\/\t\m\p\.\8\k\C\d\N\a\Y\T\r\K ]] 00:39:41.002 14:41:50 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:39:41.002 14:41:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:41.002 14:41:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:41.002 14:41:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:41.002 14:41:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:41.261 14:41:50 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Jaae3OifPg == \/\t\m\p\/\t\m\p\.\J\a\a\e\3\O\i\f\P\g ]] 00:39:41.261 14:41:50 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:39:41.261 14:41:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:41.261 14:41:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:41.261 14:41:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:41.261 14:41:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:41.261 14:41:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:41.519 14:41:50 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:39:41.519 14:41:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:39:41.519 14:41:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:41.519 14:41:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:41.519 14:41:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:41.519 14:41:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:41.519 14:41:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:41.777 14:41:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:41.777 14:41:51 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:41.777 14:41:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:42.035 [2024-07-10 14:41:51.385767] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:42.035 nvme0n1 00:39:42.035 14:41:51 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:39:42.035 14:41:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:42.035 14:41:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:42.035 14:41:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:42.035 14:41:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:42.035 14:41:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:42.293 14:41:51 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:39:42.293 14:41:51 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:39:42.293 14:41:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:42.293 14:41:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:42.293 14:41:51 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:42.293 14:41:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:42.293 14:41:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:42.552 14:41:51 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:39:42.552 14:41:51 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:42.810 Running I/O for 1 seconds... 00:39:43.744 00:39:43.744 Latency(us) 00:39:43.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:43.744 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:43.744 nvme0n1 : 1.03 3404.52 13.30 0.00 0.00 37005.58 8204.14 40389.59 00:39:43.744 =================================================================================================================== 00:39:43.744 Total : 3404.52 13.30 0.00 0.00 37005.58 8204.14 40389.59 00:39:43.744 0 00:39:43.744 14:41:53 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:43.744 14:41:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:44.001 14:41:53 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:39:44.001 14:41:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:44.001 14:41:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:44.001 14:41:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:44.001 14:41:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:44.001 14:41:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:44.259 14:41:53 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:39:44.259 14:41:53 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:39:44.259 14:41:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:44.259 14:41:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:44.259 14:41:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:44.259 14:41:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:44.259 14:41:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:44.518 14:41:53 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:44.518 14:41:53 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:44.518 14:41:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:44.518 14:41:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:44.518 14:41:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:44.518 14:41:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:44.518 14:41:53 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:44.518 14:41:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:44.518 14:41:53 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:44.518 14:41:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:44.776 [2024-07-10 14:41:54.121297] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:44.776 [2024-07-10 14:41:54.122238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (107): Transport endpoint is not connected 00:39:44.776 [2024-07-10 14:41:54.123209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:39:44.776 [2024-07-10 14:41:54.124205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:44.776 [2024-07-10 14:41:54.124239] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:44.776 [2024-07-10 14:41:54.124271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:44.776 request: 00:39:44.776 { 00:39:44.776 "name": "nvme0", 00:39:44.776 "trtype": "tcp", 00:39:44.776 "traddr": "127.0.0.1", 00:39:44.776 "adrfam": "ipv4", 00:39:44.776 "trsvcid": "4420", 00:39:44.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:44.776 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:44.776 "prchk_reftag": false, 00:39:44.776 "prchk_guard": false, 00:39:44.776 "hdgst": false, 00:39:44.776 "ddgst": false, 00:39:44.776 "psk": "key1", 00:39:44.776 "method": "bdev_nvme_attach_controller", 00:39:44.776 "req_id": 1 00:39:44.776 } 00:39:44.776 Got JSON-RPC error response 00:39:44.776 response: 00:39:44.776 { 00:39:44.776 "code": -5, 00:39:44.776 "message": "Input/output error" 00:39:44.776 } 00:39:44.776 14:41:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:44.776 14:41:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:44.776 14:41:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:44.776 14:41:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:44.776 14:41:54 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:39:44.776 14:41:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:44.776 14:41:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:44.776 14:41:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:44.776 14:41:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:44.776 14:41:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:45.034 14:41:54 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:39:45.034 14:41:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:39:45.034 14:41:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:45.034 
14:41:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:45.034 14:41:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:45.034 14:41:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:45.034 14:41:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:45.292 14:41:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:45.292 14:41:54 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:39:45.292 14:41:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:45.550 14:41:54 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:39:45.550 14:41:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:45.808 14:41:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:39:45.808 14:41:55 keyring_file -- keyring/file.sh@77 -- # jq length 00:39:45.808 14:41:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:46.067 14:41:55 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:39:46.067 14:41:55 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.8kCdNaYTrK 00:39:46.067 14:41:55 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.8kCdNaYTrK 00:39:46.067 14:41:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:46.067 14:41:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.8kCdNaYTrK 00:39:46.067 14:41:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:46.067 14:41:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:46.067 14:41:55 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:46.067 14:41:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:46.067 14:41:55 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8kCdNaYTrK 00:39:46.067 14:41:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8kCdNaYTrK 00:39:46.325 [2024-07-10 14:41:55.612184] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.8kCdNaYTrK': 0100660 00:39:46.325 [2024-07-10 14:41:55.612244] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:46.325 request: 00:39:46.325 { 00:39:46.325 "name": "key0", 00:39:46.325 "path": "/tmp/tmp.8kCdNaYTrK", 00:39:46.325 "method": "keyring_file_add_key", 00:39:46.325 "req_id": 1 00:39:46.325 } 00:39:46.325 Got JSON-RPC error response 00:39:46.325 response: 00:39:46.325 { 00:39:46.325 "code": -1, 00:39:46.325 "message": "Operation not permitted" 00:39:46.325 } 00:39:46.325 14:41:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:46.325 14:41:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:46.325 14:41:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:46.325 14:41:55 keyring_file 
-- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:46.325 14:41:55 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.8kCdNaYTrK 00:39:46.325 14:41:55 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8kCdNaYTrK 00:39:46.325 14:41:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8kCdNaYTrK 00:39:46.584 14:41:55 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.8kCdNaYTrK 00:39:46.584 14:41:55 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:39:46.584 14:41:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:46.584 14:41:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:46.584 14:41:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:46.584 14:41:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:46.584 14:41:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:46.842 14:41:56 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:39:46.843 14:41:56 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:46.843 14:41:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:46.843 14:41:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:46.843 14:41:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:46.843 14:41:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:46.843 14:41:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:46.843 14:41:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:46.843 14:41:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:46.843 14:41:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:47.101 [2024-07-10 14:41:56.374366] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.8kCdNaYTrK': No such file or directory 00:39:47.101 [2024-07-10 14:41:56.374445] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:47.101 [2024-07-10 14:41:56.374501] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:47.101 [2024-07-10 14:41:56.374519] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:47.101 [2024-07-10 14:41:56.374538] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:47.101 request: 00:39:47.101 { 00:39:47.101 "name": "nvme0", 00:39:47.101 "trtype": "tcp", 00:39:47.101 "traddr": "127.0.0.1", 00:39:47.101 "adrfam": "ipv4", 00:39:47.101 
"trsvcid": "4420", 00:39:47.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:47.101 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:47.101 "prchk_reftag": false, 00:39:47.101 "prchk_guard": false, 00:39:47.101 "hdgst": false, 00:39:47.101 "ddgst": false, 00:39:47.101 "psk": "key0", 00:39:47.101 "method": "bdev_nvme_attach_controller", 00:39:47.101 "req_id": 1 00:39:47.101 } 00:39:47.101 Got JSON-RPC error response 00:39:47.101 response: 00:39:47.101 { 00:39:47.101 "code": -19, 00:39:47.101 "message": "No such device" 00:39:47.101 } 00:39:47.101 14:41:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:47.101 14:41:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:47.101 14:41:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:47.101 14:41:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:47.101 14:41:56 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:39:47.101 14:41:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:47.359 14:41:56 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:47.359 14:41:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:47.359 14:41:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:47.359 14:41:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:47.359 14:41:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:47.359 14:41:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:47.359 14:41:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4kTwjbVwiZ 00:39:47.359 14:41:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:47.359 14:41:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:47.359 14:41:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:47.359 14:41:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:47.359 14:41:56 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:47.359 14:41:56 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:47.359 14:41:56 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:47.359 14:41:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4kTwjbVwiZ 00:39:47.359 14:41:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4kTwjbVwiZ 00:39:47.359 14:41:56 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.4kTwjbVwiZ 00:39:47.359 14:41:56 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4kTwjbVwiZ 00:39:47.359 14:41:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4kTwjbVwiZ 00:39:47.625 14:41:56 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:47.625 14:41:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:47.939 nvme0n1 00:39:47.939 
14:41:57 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:39:47.939 14:41:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:47.939 14:41:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:47.939 14:41:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:47.939 14:41:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:47.939 14:41:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:48.224 14:41:57 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:39:48.224 14:41:57 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:39:48.224 14:41:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:48.481 14:41:57 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:39:48.481 14:41:57 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:39:48.481 14:41:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:48.481 14:41:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:48.481 14:41:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:48.739 14:41:57 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:39:48.739 14:41:57 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:39:48.739 14:41:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:48.739 14:41:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:48.739 14:41:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:48.739 14:41:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:48.739 14:41:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:48.998 14:41:58 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:39:48.998 14:41:58 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:48.998 14:41:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:49.256 14:41:58 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:39:49.256 14:41:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:49.256 14:41:58 keyring_file -- keyring/file.sh@104 -- # jq length 00:39:49.514 14:41:58 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:39:49.514 14:41:58 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4kTwjbVwiZ 00:39:49.514 14:41:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4kTwjbVwiZ 00:39:49.514 14:41:58 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Jaae3OifPg 00:39:49.514 14:41:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Jaae3OifPg 00:39:49.771 14:41:59 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:49.771 14:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:50.337 nvme0n1 00:39:50.337 14:41:59 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:39:50.337 14:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:50.596 14:41:59 keyring_file -- keyring/file.sh@112 -- # config='{ 00:39:50.596 "subsystems": [ 00:39:50.596 { 00:39:50.596 "subsystem": "keyring", 00:39:50.596 "config": [ 00:39:50.596 { 00:39:50.596 "method": "keyring_file_add_key", 00:39:50.596 "params": { 00:39:50.596 "name": "key0", 00:39:50.596 "path": "/tmp/tmp.4kTwjbVwiZ" 00:39:50.596 } 00:39:50.596 }, 00:39:50.596 { 00:39:50.596 "method": "keyring_file_add_key", 00:39:50.596 "params": { 00:39:50.596 "name": "key1", 00:39:50.596 "path": "/tmp/tmp.Jaae3OifPg" 00:39:50.596 } 00:39:50.596 } 00:39:50.596 ] 00:39:50.596 }, 00:39:50.596 { 00:39:50.596 "subsystem": "iobuf", 00:39:50.596 "config": [ 00:39:50.596 { 00:39:50.596 "method": "iobuf_set_options", 00:39:50.596 "params": { 00:39:50.596 "small_pool_count": 8192, 00:39:50.596 "large_pool_count": 1024, 00:39:50.596 "small_bufsize": 8192, 00:39:50.596 "large_bufsize": 135168 00:39:50.596 } 00:39:50.596 } 00:39:50.596 ] 00:39:50.596 }, 00:39:50.596 { 00:39:50.596 "subsystem": "sock", 00:39:50.596 "config": [ 00:39:50.596 { 00:39:50.596 "method": "sock_set_default_impl", 00:39:50.596 "params": { 00:39:50.596 "impl_name": "posix" 00:39:50.596 } 00:39:50.596 }, 00:39:50.596 { 00:39:50.596 "method": "sock_impl_set_options", 00:39:50.596 "params": { 00:39:50.596 "impl_name": "ssl", 00:39:50.596 "recv_buf_size": 4096, 00:39:50.596 "send_buf_size": 4096, 00:39:50.596 "enable_recv_pipe": true, 00:39:50.596 "enable_quickack": false, 00:39:50.596 "enable_placement_id": 0, 00:39:50.596 "enable_zerocopy_send_server": true, 00:39:50.596 "enable_zerocopy_send_client": false, 00:39:50.596 "zerocopy_threshold": 0, 00:39:50.596 "tls_version": 0, 00:39:50.596 "enable_ktls": false 00:39:50.596 } 00:39:50.596 }, 00:39:50.596 { 00:39:50.596 "method": "sock_impl_set_options", 00:39:50.596 "params": { 00:39:50.596 "impl_name": "posix", 00:39:50.596 "recv_buf_size": 2097152, 00:39:50.596 "send_buf_size": 2097152, 00:39:50.596 "enable_recv_pipe": true, 00:39:50.596 "enable_quickack": false, 00:39:50.596 "enable_placement_id": 0, 00:39:50.596 "enable_zerocopy_send_server": true, 00:39:50.596 "enable_zerocopy_send_client": false, 00:39:50.596 "zerocopy_threshold": 0, 00:39:50.596 "tls_version": 0, 00:39:50.596 "enable_ktls": false 00:39:50.596 } 00:39:50.596 } 00:39:50.596 ] 00:39:50.596 }, 00:39:50.596 { 00:39:50.596 "subsystem": "vmd", 00:39:50.596 "config": [] 00:39:50.596 }, 00:39:50.596 { 00:39:50.596 "subsystem": "accel", 00:39:50.596 "config": [ 00:39:50.596 { 00:39:50.596 "method": "accel_set_options", 00:39:50.596 "params": { 00:39:50.596 "small_cache_size": 128, 00:39:50.596 "large_cache_size": 16, 00:39:50.596 "task_count": 2048, 00:39:50.596 "sequence_count": 2048, 00:39:50.596 "buf_count": 2048 00:39:50.596 } 00:39:50.596 } 00:39:50.596 ] 00:39:50.596 
}, 00:39:50.596 { 00:39:50.596 "subsystem": "bdev", 00:39:50.596 "config": [ 00:39:50.596 { 00:39:50.596 "method": "bdev_set_options", 00:39:50.596 "params": { 00:39:50.596 "bdev_io_pool_size": 65535, 00:39:50.596 "bdev_io_cache_size": 256, 00:39:50.596 "bdev_auto_examine": true, 00:39:50.596 "iobuf_small_cache_size": 128, 00:39:50.596 "iobuf_large_cache_size": 16 00:39:50.596 } 00:39:50.596 }, 00:39:50.596 { 00:39:50.596 "method": "bdev_raid_set_options", 00:39:50.596 "params": { 00:39:50.596 "process_window_size_kb": 1024 00:39:50.596 } 00:39:50.596 }, 00:39:50.596 { 00:39:50.596 "method": "bdev_iscsi_set_options", 00:39:50.596 "params": { 00:39:50.596 "timeout_sec": 30 00:39:50.596 } 00:39:50.596 }, 00:39:50.596 { 00:39:50.596 "method": "bdev_nvme_set_options", 00:39:50.596 "params": { 00:39:50.596 "action_on_timeout": "none", 00:39:50.596 "timeout_us": 0, 00:39:50.596 "timeout_admin_us": 0, 00:39:50.596 "keep_alive_timeout_ms": 10000, 00:39:50.596 "arbitration_burst": 0, 00:39:50.596 "low_priority_weight": 0, 00:39:50.596 "medium_priority_weight": 0, 00:39:50.596 "high_priority_weight": 0, 00:39:50.596 "nvme_adminq_poll_period_us": 10000, 00:39:50.596 "nvme_ioq_poll_period_us": 0, 00:39:50.596 "io_queue_requests": 512, 00:39:50.596 "delay_cmd_submit": true, 00:39:50.596 "transport_retry_count": 4, 00:39:50.596 "bdev_retry_count": 3, 00:39:50.596 "transport_ack_timeout": 0, 00:39:50.596 "ctrlr_loss_timeout_sec": 0, 00:39:50.596 "reconnect_delay_sec": 0, 00:39:50.596 "fast_io_fail_timeout_sec": 0, 00:39:50.596 "disable_auto_failback": false, 00:39:50.596 "generate_uuids": false, 00:39:50.596 "transport_tos": 0, 00:39:50.596 "nvme_error_stat": false, 00:39:50.596 "rdma_srq_size": 0, 00:39:50.596 "io_path_stat": false, 00:39:50.596 "allow_accel_sequence": false, 00:39:50.596 "rdma_max_cq_size": 0, 00:39:50.596 "rdma_cm_event_timeout_ms": 0, 00:39:50.596 "dhchap_digests": [ 00:39:50.596 "sha256", 00:39:50.596 "sha384", 00:39:50.596 "sha512" 00:39:50.596 ], 00:39:50.596 "dhchap_dhgroups": [ 00:39:50.596 "null", 00:39:50.596 "ffdhe2048", 00:39:50.596 "ffdhe3072", 00:39:50.596 "ffdhe4096", 00:39:50.596 "ffdhe6144", 00:39:50.596 "ffdhe8192" 00:39:50.596 ] 00:39:50.596 } 00:39:50.596 }, 00:39:50.596 { 00:39:50.596 "method": "bdev_nvme_attach_controller", 00:39:50.596 "params": { 00:39:50.596 "name": "nvme0", 00:39:50.596 "trtype": "TCP", 00:39:50.596 "adrfam": "IPv4", 00:39:50.596 "traddr": "127.0.0.1", 00:39:50.596 "trsvcid": "4420", 00:39:50.596 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:50.596 "prchk_reftag": false, 00:39:50.596 "prchk_guard": false, 00:39:50.596 "ctrlr_loss_timeout_sec": 0, 00:39:50.596 "reconnect_delay_sec": 0, 00:39:50.596 "fast_io_fail_timeout_sec": 0, 00:39:50.596 "psk": "key0", 00:39:50.596 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:50.596 "hdgst": false, 00:39:50.596 "ddgst": false 00:39:50.596 } 00:39:50.596 }, 00:39:50.596 { 00:39:50.596 "method": "bdev_nvme_set_hotplug", 00:39:50.596 "params": { 00:39:50.596 "period_us": 100000, 00:39:50.596 "enable": false 00:39:50.596 } 00:39:50.596 }, 00:39:50.596 { 00:39:50.596 "method": "bdev_wait_for_examine" 00:39:50.596 } 00:39:50.596 ] 00:39:50.596 }, 00:39:50.596 { 00:39:50.596 "subsystem": "nbd", 00:39:50.596 "config": [] 00:39:50.596 } 00:39:50.596 ] 00:39:50.596 }' 00:39:50.597 14:41:59 keyring_file -- keyring/file.sh@114 -- # killprocess 1583717 00:39:50.597 14:41:59 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1583717 ']' 00:39:50.597 14:41:59 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1583717 00:39:50.597 14:41:59 keyring_file -- common/autotest_common.sh@953 -- # uname 00:39:50.597 14:41:59 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:50.597 14:41:59 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1583717 00:39:50.597 14:41:59 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:50.597 14:41:59 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:50.597 14:41:59 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1583717' 00:39:50.597 killing process with pid 1583717 00:39:50.597 14:41:59 keyring_file -- common/autotest_common.sh@967 -- # kill 1583717 00:39:50.597 Received shutdown signal, test time was about 1.000000 seconds 00:39:50.597 00:39:50.597 Latency(us) 00:39:50.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:50.597 =================================================================================================================== 00:39:50.597 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:50.597 14:41:59 keyring_file -- common/autotest_common.sh@972 -- # wait 1583717 00:39:51.530 14:42:00 keyring_file -- keyring/file.sh@117 -- # bperfpid=1585365 00:39:51.530 14:42:00 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1585365 /var/tmp/bperf.sock 00:39:51.530 14:42:00 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1585365 ']' 00:39:51.530 14:42:00 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:51.530 14:42:00 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:51.530 14:42:00 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:51.530 14:42:00 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:39:51.530 "subsystems": [ 00:39:51.530 { 00:39:51.530 "subsystem": "keyring", 00:39:51.530 "config": [ 00:39:51.530 { 00:39:51.530 "method": "keyring_file_add_key", 00:39:51.530 "params": { 00:39:51.530 "name": "key0", 00:39:51.530 "path": "/tmp/tmp.4kTwjbVwiZ" 00:39:51.530 } 00:39:51.530 }, 00:39:51.530 { 00:39:51.530 "method": "keyring_file_add_key", 00:39:51.530 "params": { 00:39:51.530 "name": "key1", 00:39:51.530 "path": "/tmp/tmp.Jaae3OifPg" 00:39:51.530 } 00:39:51.530 } 00:39:51.530 ] 00:39:51.530 }, 00:39:51.530 { 00:39:51.530 "subsystem": "iobuf", 00:39:51.530 "config": [ 00:39:51.530 { 00:39:51.530 "method": "iobuf_set_options", 00:39:51.530 "params": { 00:39:51.530 "small_pool_count": 8192, 00:39:51.530 "large_pool_count": 1024, 00:39:51.530 "small_bufsize": 8192, 00:39:51.530 "large_bufsize": 135168 00:39:51.530 } 00:39:51.530 } 00:39:51.530 ] 00:39:51.530 }, 00:39:51.530 { 00:39:51.530 "subsystem": "sock", 00:39:51.530 "config": [ 00:39:51.530 { 00:39:51.530 "method": "sock_set_default_impl", 00:39:51.530 "params": { 00:39:51.530 "impl_name": "posix" 00:39:51.530 } 00:39:51.530 }, 00:39:51.530 { 00:39:51.530 "method": "sock_impl_set_options", 00:39:51.530 "params": { 00:39:51.530 "impl_name": "ssl", 00:39:51.530 "recv_buf_size": 4096, 00:39:51.530 "send_buf_size": 4096, 00:39:51.530 "enable_recv_pipe": true, 00:39:51.530 "enable_quickack": false, 00:39:51.530 "enable_placement_id": 0, 00:39:51.530 "enable_zerocopy_send_server": true, 00:39:51.530 "enable_zerocopy_send_client": false, 00:39:51.530 "zerocopy_threshold": 0, 00:39:51.530 
"tls_version": 0, 00:39:51.530 "enable_ktls": false 00:39:51.530 } 00:39:51.530 }, 00:39:51.530 { 00:39:51.530 "method": "sock_impl_set_options", 00:39:51.530 "params": { 00:39:51.530 "impl_name": "posix", 00:39:51.530 "recv_buf_size": 2097152, 00:39:51.530 "send_buf_size": 2097152, 00:39:51.530 "enable_recv_pipe": true, 00:39:51.530 "enable_quickack": false, 00:39:51.530 "enable_placement_id": 0, 00:39:51.530 "enable_zerocopy_send_server": true, 00:39:51.530 "enable_zerocopy_send_client": false, 00:39:51.530 "zerocopy_threshold": 0, 00:39:51.530 "tls_version": 0, 00:39:51.530 "enable_ktls": false 00:39:51.530 } 00:39:51.530 } 00:39:51.530 ] 00:39:51.530 }, 00:39:51.530 { 00:39:51.530 "subsystem": "vmd", 00:39:51.530 "config": [] 00:39:51.530 }, 00:39:51.530 { 00:39:51.530 "subsystem": "accel", 00:39:51.530 "config": [ 00:39:51.530 { 00:39:51.530 "method": "accel_set_options", 00:39:51.530 "params": { 00:39:51.530 "small_cache_size": 128, 00:39:51.530 "large_cache_size": 16, 00:39:51.530 "task_count": 2048, 00:39:51.530 "sequence_count": 2048, 00:39:51.530 "buf_count": 2048 00:39:51.530 } 00:39:51.530 } 00:39:51.530 ] 00:39:51.531 }, 00:39:51.531 { 00:39:51.531 "subsystem": "bdev", 00:39:51.531 "config": [ 00:39:51.531 { 00:39:51.531 "method": "bdev_set_options", 00:39:51.531 "params": { 00:39:51.531 "bdev_io_pool_size": 65535, 00:39:51.531 "bdev_io_cache_size": 256, 00:39:51.531 "bdev_auto_examine": true, 00:39:51.531 "iobuf_small_cache_size": 128, 00:39:51.531 "iobuf_large_cache_size": 16 00:39:51.531 } 00:39:51.531 }, 00:39:51.531 { 00:39:51.531 "method": "bdev_raid_set_options", 00:39:51.531 "params": { 00:39:51.531 "process_window_size_kb": 1024 00:39:51.531 } 00:39:51.531 }, 00:39:51.531 { 00:39:51.531 "method": "bdev_iscsi_set_options", 00:39:51.531 "params": { 00:39:51.531 "timeout_sec": 30 00:39:51.531 } 00:39:51.531 }, 00:39:51.531 { 00:39:51.531 "method": "bdev_nvme_set_options", 00:39:51.531 "params": { 00:39:51.531 "action_on_timeout": "none", 00:39:51.531 "timeout_us": 0, 00:39:51.531 "timeout_admin_us": 0, 00:39:51.531 "keep_alive_timeout_ms": 10000, 00:39:51.531 "arbitration_burst": 0, 00:39:51.531 "low_priority_weight": 0, 00:39:51.531 "medium_priority_weight": 0, 00:39:51.531 "high_priority_weight": 0, 00:39:51.531 "nvme_adminq_poll_period_us": 10000, 00:39:51.531 "nvme_ioq_poll_period_us": 0, 00:39:51.531 "io_queue_requests": 512, 00:39:51.531 "delay_cmd_submit": true, 00:39:51.531 "transport_retry_count": 4, 00:39:51.531 "bdev_retry_count": 3, 00:39:51.531 "transport_ack_timeout": 0, 00:39:51.531 "ctrlr_loss_timeout_sec": 0, 00:39:51.531 "reconnect_delay_sec": 0, 00:39:51.531 "fast_io_fail_timeout_sec": 0, 00:39:51.531 "disable_auto_failback": false, 00:39:51.531 "generate_uuids": false, 00:39:51.531 "transport_tos": 0, 00:39:51.531 "nvme_error_stat": false, 00:39:51.531 "rdma_srq_size": 0, 00:39:51.531 "io_path_stat": false, 00:39:51.531 "allow_accel_sequence": false, 00:39:51.531 "rdma_max_cq_size": 0, 00:39:51.531 "rdma_cm_event_timeout_ms": 0, 00:39:51.531 "dhchap_digests": [ 00:39:51.531 "sha256", 00:39:51.531 "sha384", 00:39:51.531 "sha512" 00:39:51.531 ], 00:39:51.531 "dhchap_dhgroups": [ 00:39:51.531 "null", 00:39:51.531 "ffdhe2048", 00:39:51.531 "ffdhe3072", 00:39:51.531 "ffdhe4096", 00:39:51.531 "ffdhe6144", 00:39:51.531 "ffdhe8192" 00:39:51.531 ] 00:39:51.531 } 00:39:51.531 }, 00:39:51.531 { 00:39:51.531 "method": "bdev_nvme_attach_controller", 00:39:51.531 "params": { 00:39:51.531 "name": "nvme0", 00:39:51.531 "trtype": "TCP", 00:39:51.531 "adrfam": "IPv4", 
00:39:51.531 "traddr": "127.0.0.1", 00:39:51.531 "trsvcid": "4420", 00:39:51.531 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:51.531 "prchk_reftag": false, 00:39:51.531 "prchk_guard": false, 00:39:51.531 "ctrlr_loss_timeout_sec": 0, 00:39:51.531 "reconnect_delay_sec": 0, 00:39:51.531 "fast_io_fail_timeout_sec": 0, 00:39:51.531 "psk": "key0", 00:39:51.531 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:51.531 "hdgst": false, 00:39:51.531 "ddgst": false 00:39:51.531 } 00:39:51.531 }, 00:39:51.531 { 00:39:51.531 "method": "bdev_nvme_set_hotplug", 00:39:51.531 "params": { 00:39:51.531 "period_us": 100000, 00:39:51.531 "enable": false 00:39:51.531 } 00:39:51.531 }, 00:39:51.531 { 00:39:51.531 "method": "bdev_wait_for_examine" 00:39:51.531 } 00:39:51.531 ] 00:39:51.531 }, 00:39:51.531 { 00:39:51.531 "subsystem": "nbd", 00:39:51.531 "config": [] 00:39:51.531 } 00:39:51.531 ] 00:39:51.531 }' 00:39:51.531 14:42:00 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:51.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:51.531 14:42:00 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:51.531 14:42:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:51.531 [2024-07-10 14:42:00.969846] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:39:51.531 [2024-07-10 14:42:00.970002] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585365 ] 00:39:51.788 EAL: No free 2048 kB hugepages reported on node 1 00:39:51.788 [2024-07-10 14:42:01.096509] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.045 [2024-07-10 14:42:01.326494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:52.303 [2024-07-10 14:42:01.742997] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:52.561 14:42:01 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:52.561 14:42:01 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:39:52.561 14:42:01 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:39:52.562 14:42:01 keyring_file -- keyring/file.sh@120 -- # jq length 00:39:52.562 14:42:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:52.819 14:42:02 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:39:52.819 14:42:02 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:39:52.819 14:42:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:52.819 14:42:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:52.819 14:42:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:52.819 14:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:52.819 14:42:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:53.076 14:42:02 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:53.076 14:42:02 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:39:53.076 14:42:02 keyring_file -- keyring/common.sh@12 -- # 
get_key key1 00:39:53.076 14:42:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:53.076 14:42:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:53.076 14:42:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:53.076 14:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:53.335 14:42:02 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:39:53.335 14:42:02 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:39:53.335 14:42:02 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:39:53.335 14:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:53.593 14:42:02 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:39:53.593 14:42:02 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:53.593 14:42:02 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.4kTwjbVwiZ /tmp/tmp.Jaae3OifPg 00:39:53.593 14:42:02 keyring_file -- keyring/file.sh@20 -- # killprocess 1585365 00:39:53.593 14:42:02 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1585365 ']' 00:39:53.593 14:42:02 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1585365 00:39:53.593 14:42:02 keyring_file -- common/autotest_common.sh@953 -- # uname 00:39:53.593 14:42:02 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:53.593 14:42:02 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1585365 00:39:53.593 14:42:02 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:53.593 14:42:02 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:53.593 14:42:02 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1585365' 00:39:53.593 killing process with pid 1585365 00:39:53.593 14:42:02 keyring_file -- common/autotest_common.sh@967 -- # kill 1585365 00:39:53.593 Received shutdown signal, test time was about 1.000000 seconds 00:39:53.593 00:39:53.593 Latency(us) 00:39:53.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:53.593 =================================================================================================================== 00:39:53.593 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:53.593 14:42:02 keyring_file -- common/autotest_common.sh@972 -- # wait 1585365 00:39:54.527 14:42:03 keyring_file -- keyring/file.sh@21 -- # killprocess 1583576 00:39:54.527 14:42:03 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1583576 ']' 00:39:54.527 14:42:03 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1583576 00:39:54.527 14:42:03 keyring_file -- common/autotest_common.sh@953 -- # uname 00:39:54.527 14:42:03 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:54.527 14:42:03 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1583576 00:39:54.527 14:42:04 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:54.527 14:42:04 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:54.527 14:42:04 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1583576' 00:39:54.527 killing process with pid 1583576 00:39:54.527 14:42:04 keyring_file -- 
common/autotest_common.sh@967 -- # kill 1583576 00:39:54.527 [2024-07-10 14:42:04.006676] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for 14:42:04 keyring_file -- common/autotest_common.sh@972 -- # wait 1583576 00:39:54.527 removal in v24.09 hit 1 times 00:39:57.053 00:39:57.053 real 0m19.391s 00:39:57.053 user 0m42.455s 00:39:57.053 sys 0m3.728s 00:39:57.053 14:42:06 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:57.053 14:42:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:57.053 ************************************ 00:39:57.053 END TEST keyring_file 00:39:57.053 ************************************ 00:39:57.053 14:42:06 -- common/autotest_common.sh@1142 -- # return 0 00:39:57.053 14:42:06 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:39:57.053 14:42:06 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:57.053 14:42:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:57.053 14:42:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:57.053 14:42:06 -- common/autotest_common.sh@10 -- # set +x 00:39:57.312 ************************************ 00:39:57.312 START TEST keyring_linux 00:39:57.312 ************************************ 00:39:57.312 14:42:06 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:57.312 * Looking for test storage... 00:39:57.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:57.312 14:42:06 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:57.312 14:42:06 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:57.312 14:42:06 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:57.312 14:42:06 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:57.312 14:42:06 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.312 14:42:06 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.312 14:42:06 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.312 14:42:06 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:57.312 14:42:06 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:57.312 14:42:06 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:57.312 14:42:06 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:57.312 14:42:06 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:57.312 14:42:06 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:57.312 14:42:06 
keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:57.312 14:42:06 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:57.312 /tmp/:spdk-test:key0 00:39:57.312 14:42:06 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:39:57.312 14:42:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:57.312 14:42:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:57.312 /tmp/:spdk-test:key1 00:39:57.312 14:42:06 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1586177 00:39:57.312 14:42:06 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:57.312 14:42:06 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1586177 00:39:57.312 14:42:06 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1586177 ']' 00:39:57.312 14:42:06 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:57.313 14:42:06 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 
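The prep_key calls traced above turn the raw hex strings (key0=00112233445566778899aabbccddeeff, key1=112233445566778899aabbccddeeff00) into interchange-format TLS PSKs of the form NVMeTLSkey-1:00:<base64>: before writing them to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600. Judging from the traced output, the base64 payload is the ASCII key string followed by what appears to be its little-endian CRC-32; the authoritative helper is format_interchange_psk in test/nvmf/common.sh, so the bash sketch below only illustrates that apparent layout and is not the exact implementation:

  # Minimal sketch, assuming the payload layout described above (key bytes + CRC-32 trailer).
  format_interchange_psk_sketch() {
      local key=$1 digest=$2   # digest 0 maps to the '00' (no hash) field in the prefix
      python3 -c 'import base64, struct, sys, zlib; k = sys.argv[1].encode(); crc = struct.pack("<I", zlib.crc32(k)); print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k + crc).decode()))' "$key" "$digest"
  }

  format_interchange_psk_sketch 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
  chmod 0600 /tmp/:spdk-test:key0
  # for key0 this should reproduce the value echoed in the trace:
  # NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
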
00:39:57.313 14:42:06 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:57.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:57.313 14:42:06 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:57.313 14:42:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:57.571 [2024-07-10 14:42:06.796051] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:39:57.571 [2024-07-10 14:42:06.796215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586177 ] 00:39:57.571 EAL: No free 2048 kB hugepages reported on node 1 00:39:57.571 [2024-07-10 14:42:06.919108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:57.829 [2024-07-10 14:42:07.169691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:58.765 14:42:07 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:58.765 14:42:07 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:39:58.765 14:42:07 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:58.765 14:42:07 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:58.765 14:42:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:58.765 [2024-07-10 14:42:07.968052] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:58.765 null0 00:39:58.765 [2024-07-10 14:42:08.000053] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:58.765 [2024-07-10 14:42:08.000627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:58.765 14:42:08 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:58.765 14:42:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:58.765 467530296 00:39:58.765 14:42:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:58.765 269077828 00:39:58.765 14:42:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1586597 00:39:58.765 14:42:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1586597 /var/tmp/bperf.sock 00:39:58.765 14:42:08 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:58.765 14:42:08 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1586597 ']' 00:39:58.765 14:42:08 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:58.765 14:42:08 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:58.765 14:42:08 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:58.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
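The keyctl calls above are the core of the keyring_linux scenario: both interchange-format PSKs are loaded into the kernel session keyring (@s) as user-type keys named :spdk-test:key0 and :spdk-test:key1, and the serials printed by keyctl (467530296 and 269077828) are what the later check_keys and cleanup steps resolve and unlink. Condensed to a stand-alone snippet, the keyring handling looks like this:

  # Load the interchange-format PSK into the session keyring under a well-known name.
  sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
  echo "key0 serial: $sn"

  # Resolve the name back to its serial and inspect the stored payload.
  keyctl search @s user :spdk-test:key0     # prints the same serial number
  keyctl print "$sn"                        # prints the NVMeTLSkey-1:00:...: payload

  # Cleanup, as done by unlink_key at the end of the test.
  keyctl unlink "$sn"
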
00:39:58.765 14:42:08 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:58.765 14:42:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:58.765 [2024-07-10 14:42:08.101735] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:39:58.765 [2024-07-10 14:42:08.101898] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586597 ] 00:39:58.765 EAL: No free 2048 kB hugepages reported on node 1 00:39:58.765 [2024-07-10 14:42:08.232448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:59.023 [2024-07-10 14:42:08.483946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:59.595 14:42:09 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:59.595 14:42:09 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:39:59.595 14:42:09 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:59.595 14:42:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:59.853 14:42:09 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:59.853 14:42:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:00.418 14:42:09 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:00.418 14:42:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:00.676 [2024-07-10 14:42:10.108194] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:00.933 nvme0n1 00:40:00.933 14:42:10 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:00.933 14:42:10 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:00.933 14:42:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:00.933 14:42:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:00.933 14:42:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:00.933 14:42:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:01.190 14:42:10 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:01.190 14:42:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:01.190 14:42:10 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:01.190 14:42:10 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:01.190 14:42:10 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:01.190 14:42:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:01.190 14:42:10 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == ":spdk-test:key0")' 00:40:01.447 14:42:10 keyring_linux -- keyring/linux.sh@25 -- # sn=467530296 00:40:01.447 14:42:10 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:01.447 14:42:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:01.447 14:42:10 keyring_linux -- keyring/linux.sh@26 -- # [[ 467530296 == \4\6\7\5\3\0\2\9\6 ]] 00:40:01.447 14:42:10 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 467530296 00:40:01.447 14:42:10 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:01.447 14:42:10 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:01.447 Running I/O for 1 seconds... 00:40:02.379 00:40:02.379 Latency(us) 00:40:02.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:02.379 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:02.379 nvme0n1 : 1.03 3496.49 13.66 0.00 0.00 36161.75 7912.87 41166.32 00:40:02.379 =================================================================================================================== 00:40:02.379 Total : 3496.49 13.66 0.00 0.00 36161.75 7912.87 41166.32 00:40:02.379 0 00:40:02.379 14:42:11 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:02.379 14:42:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:02.636 14:42:12 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:02.636 14:42:12 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:02.636 14:42:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:02.636 14:42:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:02.636 14:42:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:02.636 14:42:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:02.894 14:42:12 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:02.894 14:42:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:02.894 14:42:12 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:02.894 14:42:12 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:02.894 14:42:12 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:40:02.894 14:42:12 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:02.894 14:42:12 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:02.894 14:42:12 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:02.894 14:42:12 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:02.894 14:42:12 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:40:02.894 14:42:12 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:02.894 14:42:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:03.152 [2024-07-10 14:42:12.590118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:03.152 [2024-07-10 14:42:12.591026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (107): Transport endpoint is not connected 00:40:03.152 [2024-07-10 14:42:12.591990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (9): Bad file descriptor 00:40:03.152 [2024-07-10 14:42:12.592985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:03.152 [2024-07-10 14:42:12.593025] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:03.152 [2024-07-10 14:42:12.593048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:03.152 request: 00:40:03.152 { 00:40:03.152 "name": "nvme0", 00:40:03.152 "trtype": "tcp", 00:40:03.152 "traddr": "127.0.0.1", 00:40:03.152 "adrfam": "ipv4", 00:40:03.152 "trsvcid": "4420", 00:40:03.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:03.152 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:03.152 "prchk_reftag": false, 00:40:03.152 "prchk_guard": false, 00:40:03.152 "hdgst": false, 00:40:03.152 "ddgst": false, 00:40:03.152 "psk": ":spdk-test:key1", 00:40:03.152 "method": "bdev_nvme_attach_controller", 00:40:03.152 "req_id": 1 00:40:03.152 } 00:40:03.152 Got JSON-RPC error response 00:40:03.152 response: 00:40:03.152 { 00:40:03.152 "code": -5, 00:40:03.152 "message": "Input/output error" 00:40:03.152 } 00:40:03.152 14:42:12 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:40:03.152 14:42:12 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:03.152 14:42:12 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:03.152 14:42:12 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@33 -- # sn=467530296 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 467530296 00:40:03.152 1 links removed 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 
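Stripped of the wrapper functions, the bperf.sock traffic in this test is a short JSON-RPC sequence against bdevperf (which was started with --wait-for-rpc): enable the Linux-keyring backend, finish framework init, attach an NVMe/TCP controller whose TLS PSK is referenced by keyring name instead of a file path, confirm the key is visible through keyring_get_keys, run I/O, and detach. The failing attach with :spdk-test:key1 is the intended negative path (the NOT helper inverts its exit status), presumably because the target side was provisioned with key0 only. A condensed replay, with the rpc.py path shortened for readability:

  rpc="scripts/rpc.py -s /var/tmp/bperf.sock"    # the trace uses the full workspace path

  $rpc keyring_linux_set_options --enable        # allow lookups in the kernel session keyring
  $rpc framework_start_init

  # TLS attach: --psk names a keyring entry, not a file on disk.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

  # check_keys: the key and its kernel serial should be visible to the application.
  $rpc keyring_get_keys | jq '.[] | select(.name == ":spdk-test:key0")'

  $rpc bdev_nvme_detach_controller nvme0

  # Negative path: this attach is expected to fail (compare the JSON-RPC error above).
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 \
      && echo "unexpected: attach with key1 succeeded"
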
00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@33 -- # sn=269077828 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 269077828 00:40:03.152 1 links removed 00:40:03.152 14:42:12 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1586597 00:40:03.152 14:42:12 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1586597 ']' 00:40:03.152 14:42:12 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1586597 00:40:03.152 14:42:12 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:03.152 14:42:12 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:03.152 14:42:12 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1586597 00:40:03.410 14:42:12 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:03.410 14:42:12 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:03.410 14:42:12 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1586597' 00:40:03.410 killing process with pid 1586597 00:40:03.410 14:42:12 keyring_linux -- common/autotest_common.sh@967 -- # kill 1586597 00:40:03.410 Received shutdown signal, test time was about 1.000000 seconds 00:40:03.410 00:40:03.410 Latency(us) 00:40:03.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:03.410 =================================================================================================================== 00:40:03.410 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:03.410 14:42:12 keyring_linux -- common/autotest_common.sh@972 -- # wait 1586597 00:40:04.343 14:42:13 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1586177 00:40:04.343 14:42:13 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1586177 ']' 00:40:04.343 14:42:13 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1586177 00:40:04.343 14:42:13 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:04.343 14:42:13 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:04.343 14:42:13 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1586177 00:40:04.343 14:42:13 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:04.343 14:42:13 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:04.343 14:42:13 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1586177' 00:40:04.343 killing process with pid 1586177 00:40:04.343 14:42:13 keyring_linux -- common/autotest_common.sh@967 -- # kill 1586177 00:40:04.343 14:42:13 keyring_linux -- common/autotest_common.sh@972 -- # wait 1586177 00:40:06.874 00:40:06.874 real 0m9.461s 00:40:06.874 user 0m15.734s 00:40:06.874 sys 0m1.934s 00:40:06.874 14:42:16 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:06.874 14:42:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:06.874 ************************************ 00:40:06.874 END TEST keyring_linux 00:40:06.874 ************************************ 00:40:06.874 14:42:16 -- common/autotest_common.sh@1142 -- # return 0 00:40:06.874 14:42:16 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:40:06.874 14:42:16 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:40:06.874 14:42:16 
-- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:40:06.874 14:42:16 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:40:06.874 14:42:16 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:40:06.874 14:42:16 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:40:06.874 14:42:16 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:40:06.874 14:42:16 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:40:06.874 14:42:16 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:40:06.874 14:42:16 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:40:06.874 14:42:16 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:40:06.874 14:42:16 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:40:06.874 14:42:16 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:40:06.874 14:42:16 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:40:06.874 14:42:16 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:40:06.874 14:42:16 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:40:06.874 14:42:16 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:40:06.874 14:42:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:06.874 14:42:16 -- common/autotest_common.sh@10 -- # set +x 00:40:06.874 14:42:16 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:40:06.874 14:42:16 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:40:06.874 14:42:16 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:40:06.874 14:42:16 -- common/autotest_common.sh@10 -- # set +x 00:40:08.774 INFO: APP EXITING 00:40:08.774 INFO: killing all VMs 00:40:08.774 INFO: killing vhost app 00:40:08.774 INFO: EXIT DONE 00:40:09.340 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:40:09.340 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:40:09.340 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:40:09.340 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:40:09.597 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:40:09.597 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:40:09.597 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:40:09.598 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:40:09.598 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:40:09.598 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:40:09.598 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:40:09.598 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:40:09.598 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:40:09.598 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:40:09.598 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:40:09.598 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:40:09.598 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:40:10.971 Cleaning 00:40:10.971 Removing: /var/run/dpdk/spdk0/config 00:40:10.971 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:10.971 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:10.972 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:10.972 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:10.972 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:10.972 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:10.972 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:10.972 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:10.972 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:10.972 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:10.972 Removing: 
/var/run/dpdk/spdk1/config 00:40:10.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:10.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:10.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:10.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:10.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:10.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:10.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:10.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:10.972 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:10.972 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:10.972 Removing: /var/run/dpdk/spdk1/mp_socket 00:40:10.972 Removing: /var/run/dpdk/spdk2/config 00:40:10.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:10.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:10.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:10.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:10.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:10.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:10.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:10.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:10.972 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:10.972 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:10.972 Removing: /var/run/dpdk/spdk3/config 00:40:10.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:10.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:10.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:10.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:10.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:10.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:10.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:10.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:10.972 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:10.972 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:10.972 Removing: /var/run/dpdk/spdk4/config 00:40:10.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:10.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:10.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:10.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:10.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:10.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:10.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:10.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:10.972 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:10.972 Removing: /var/run/dpdk/spdk4/hugepage_info 00:40:10.972 Removing: /dev/shm/bdev_svc_trace.1 00:40:10.972 Removing: /dev/shm/nvmf_trace.0 00:40:10.972 Removing: /dev/shm/spdk_tgt_trace.pid1237765 00:40:10.972 Removing: /var/run/dpdk/spdk0 00:40:10.972 Removing: /var/run/dpdk/spdk1 00:40:10.972 Removing: /var/run/dpdk/spdk2 00:40:10.972 Removing: /var/run/dpdk/spdk3 00:40:10.972 Removing: /var/run/dpdk/spdk4 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1234382 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1236015 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1237765 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1238478 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1239427 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1239963 
00:40:10.972 Removing: /var/run/dpdk/spdk_pid1240830 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1241086 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1241606 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1243063 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1244247 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1244830 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1245325 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1245895 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1246481 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1246697 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1246928 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1247238 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1247681 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1250310 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1250867 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1251398 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1251567 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1252930 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1253069 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1254380 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1254566 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1255000 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1255142 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1255578 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1255716 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1256752 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1257033 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1257422 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1257913 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1258191 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1258870 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1259304 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1259693 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1260010 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1260300 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1260711 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1261003 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1261412 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1261704 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1261999 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1262406 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1262696 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1263109 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1263412 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1263814 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1264109 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1264405 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1264819 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1265111 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1265520 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1265819 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1266145 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1266758 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1269240 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1325208 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1327965 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1335029 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1338461 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1341073 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1341479 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1345747 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1352130 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1352439 00:40:10.972 Removing: /var/run/dpdk/spdk_pid1355326 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1359178 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1361502 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1368658 
00:40:11.231 Removing: /var/run/dpdk/spdk_pid1374245 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1375569 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1376489 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1388095 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1390583 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1416490 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1419546 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1420719 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1422176 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1422446 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1422718 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1422995 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1423825 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1425273 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1426539 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1427233 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1429110 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1429933 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1430762 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1433534 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1437808 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1441337 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1465882 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1468898 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1472926 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1474501 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1476161 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1479213 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1481845 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1486472 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1486579 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1489608 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1489748 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1490005 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1490284 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1490409 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1491503 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1492792 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1494080 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1495763 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1497059 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1498236 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1502167 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1502612 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1503886 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1504756 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1508725 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1510847 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1514540 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1518115 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1525336 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1530069 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1530075 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1542654 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1543319 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1543984 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1544627 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1545629 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1546171 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1546838 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1547384 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1550257 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1550534 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1554589 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1554885 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1557242 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1562562 
00:40:11.231 Removing: /var/run/dpdk/spdk_pid1562683 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1565710 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1567224 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1568753 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1569650 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1571144 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1572134 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1577782 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1578178 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1578567 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1580453 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1580742 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1581140 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1583576 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1583717 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1585365 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1586177 00:40:11.231 Removing: /var/run/dpdk/spdk_pid1586597 00:40:11.231 Clean 00:40:11.231 14:42:20 -- common/autotest_common.sh@1451 -- # return 0 00:40:11.231 14:42:20 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:40:11.231 14:42:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:11.231 14:42:20 -- common/autotest_common.sh@10 -- # set +x 00:40:11.489 14:42:20 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:40:11.490 14:42:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:11.490 14:42:20 -- common/autotest_common.sh@10 -- # set +x 00:40:11.490 14:42:20 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:11.490 14:42:20 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:40:11.490 14:42:20 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:40:11.490 14:42:20 -- spdk/autotest.sh@391 -- # hash lcov 00:40:11.490 14:42:20 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:40:11.490 14:42:20 -- spdk/autotest.sh@393 -- # hostname 00:40:11.490 14:42:20 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:40:11.490 geninfo: WARNING: invalid characters removed from testname! 
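The coverage post-processing that follows is easier to read as a pipeline: the baseline capture taken at the start of the run (cov_base.info) is merged with the per-test capture just produced (cov_test.info) into cov_total.info, and repeated lcov -r passes then strip paths that are not SPDK code under test (DPDK, system headers, a couple of example apps). Condensed, with the output directory shortened; the final genhtml step is not part of this job and is shown only as the obvious way to browse the result locally:

  out=./output                                   # the job writes next to the spdk checkout
  lcov_opts="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

  # Merge the pre-test baseline with the post-test capture.
  lcov $lcov_opts -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

  # Drop paths that are not under test.
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $lcov_opts -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
  done

  # Optional local step (not run here): render a browsable HTML report.
  genhtml "$out/cov_total.info" -o "$out/coverage_html"
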
00:40:38.038 14:42:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:42.250 14:42:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:44.781 14:42:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:47.314 14:42:56 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:50.604 14:42:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:53.144 14:43:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:55.683 14:43:05 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:55.942 14:43:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:55.942 14:43:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:40:55.942 14:43:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:55.942 14:43:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:55.942 14:43:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.942 14:43:05 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.942 14:43:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.942 14:43:05 -- paths/export.sh@5 -- $ export PATH 00:40:55.942 14:43:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.942 14:43:05 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:40:55.942 14:43:05 -- common/autobuild_common.sh@444 -- $ date +%s 00:40:55.942 14:43:05 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720615385.XXXXXX 00:40:55.942 14:43:05 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720615385.lSnxPv 00:40:55.942 14:43:05 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:40:55.942 14:43:05 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:40:55.942 14:43:05 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:40:55.942 14:43:05 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:40:55.942 14:43:05 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:40:55.942 14:43:05 -- common/autobuild_common.sh@460 -- $ get_config_params 00:40:55.942 14:43:05 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:40:55.942 14:43:05 -- common/autotest_common.sh@10 -- $ set +x 00:40:55.942 14:43:05 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:40:55.942 14:43:05 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:40:55.942 14:43:05 -- pm/common@17 -- $ local monitor 00:40:55.942 14:43:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:55.942 14:43:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:55.942 14:43:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:55.942 14:43:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:55.942 14:43:05 -- pm/common@21 -- $ date +%s 00:40:55.942 14:43:05 -- pm/common@21 -- $ date +%s 00:40:55.942 
14:43:05 -- pm/common@25 -- $ sleep 1 00:40:55.942 14:43:05 -- pm/common@21 -- $ date +%s 00:40:55.942 14:43:05 -- pm/common@21 -- $ date +%s 00:40:55.942 14:43:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720615385 00:40:55.942 14:43:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720615385 00:40:55.942 14:43:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720615385 00:40:55.942 14:43:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720615385 00:40:55.942 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720615385_collect-vmstat.pm.log 00:40:55.942 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720615385_collect-cpu-load.pm.log 00:40:55.942 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720615385_collect-cpu-temp.pm.log 00:40:55.942 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720615385_collect-bmc-pm.bmc.pm.log 00:40:56.878 14:43:06 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:40:56.878 14:43:06 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:40:56.878 14:43:06 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:56.878 14:43:06 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:40:56.878 14:43:06 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:40:56.878 14:43:06 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:40:56.878 14:43:06 -- spdk/autopackage.sh@19 -- $ timing_finish 00:40:56.878 14:43:06 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:56.878 14:43:06 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:40:56.878 14:43:06 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:56.878 14:43:06 -- spdk/autopackage.sh@20 -- $ exit 0 00:40:56.878 14:43:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:40:56.878 14:43:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:40:56.878 14:43:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:40:56.878 14:43:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:56.878 14:43:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:40:56.878 14:43:06 -- pm/common@44 -- $ pid=1599302 00:40:56.878 14:43:06 -- pm/common@50 -- $ kill -TERM 1599302 00:40:56.878 14:43:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:56.878 14:43:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:40:56.878 14:43:06 -- 
pm/common@44 -- $ pid=1599304 00:40:56.878 14:43:06 -- pm/common@50 -- $ kill -TERM 1599304 00:40:56.878 14:43:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:56.878 14:43:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:40:56.878 14:43:06 -- pm/common@44 -- $ pid=1599306 00:40:56.878 14:43:06 -- pm/common@50 -- $ kill -TERM 1599306 00:40:56.878 14:43:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:56.878 14:43:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:40:56.878 14:43:06 -- pm/common@44 -- $ pid=1599334 00:40:56.878 14:43:06 -- pm/common@50 -- $ sudo -E kill -TERM 1599334 00:40:56.878 + [[ -n 1148064 ]] 00:40:56.878 + sudo kill 1148064 00:40:56.889 [Pipeline] } 00:40:56.908 [Pipeline] // stage 00:40:56.914 [Pipeline] } 00:40:56.933 [Pipeline] // timeout 00:40:56.937 [Pipeline] } 00:40:56.951 [Pipeline] // catchError 00:40:56.956 [Pipeline] } 00:40:56.972 [Pipeline] // wrap 00:40:56.977 [Pipeline] } 00:40:56.994 [Pipeline] // catchError 00:40:57.003 [Pipeline] stage 00:40:57.006 [Pipeline] { (Epilogue) 00:40:57.022 [Pipeline] catchError 00:40:57.023 [Pipeline] { 00:40:57.038 [Pipeline] echo 00:40:57.040 Cleanup processes 00:40:57.045 [Pipeline] sh 00:40:57.326 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:57.326 1599432 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:40:57.326 1599566 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:57.340 [Pipeline] sh 00:40:57.620 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:57.621 ++ awk '{print $1}' 00:40:57.621 ++ grep -v 'sudo pgrep' 00:40:57.621 + sudo kill -9 1599432 00:40:57.632 [Pipeline] sh 00:40:57.913 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:07.944 [Pipeline] sh 00:41:08.221 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:08.221 Artifacts sizes are good 00:41:08.237 [Pipeline] archiveArtifacts 00:41:08.244 Archiving artifacts 00:41:08.468 [Pipeline] sh 00:41:08.774 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:41:08.793 [Pipeline] cleanWs 00:41:08.804 [WS-CLEANUP] Deleting project workspace... 00:41:08.804 [WS-CLEANUP] Deferred wipeout is used... 00:41:08.811 [WS-CLEANUP] done 00:41:08.813 [Pipeline] } 00:41:08.833 [Pipeline] // catchError 00:41:08.846 [Pipeline] sh 00:41:09.125 + logger -p user.info -t JENKINS-CI 00:41:09.133 [Pipeline] } 00:41:09.149 [Pipeline] // stage 00:41:09.154 [Pipeline] } 00:41:09.172 [Pipeline] // node 00:41:09.177 [Pipeline] End of Pipeline 00:41:09.209 Finished: SUCCESS